Publications

  • Wolf, M. C., Meyer, A. S., Rowland, C. F., & Hintz, F. (2021). The effects of input modality, word difficulty and reading experience on word recognition accuracy. Collabra: Psychology, 7(1): 24919. doi:10.1525/collabra.24919.

    Abstract

    Language users encounter words in at least two different modalities. Arguably, the most frequent encounters are in spoken or written form. Previous research has shown that – compared to the spoken modality – written language features more difficult words. Thus, frequent reading might have effects on word recognition. In the present study, we investigated 1) whether input modality (spoken, written, or bimodal) has an effect on word recognition accuracy, 2) whether this modality effect interacts with word difficulty, 3) whether the interaction of word difficulty and reading experience impacts word recognition accuracy, and 4) whether this interaction is influenced by input modality. To do so, we re-analysed a dataset that was collected in the context of a vocabulary test development to assess in which modality test words should be presented. Participants had carried out a word recognition task, where non-words and words of varying difficulty were presented in auditory, visual and audio-visual modalities. In addition to this main experiment, participants had completed a receptive vocabulary and an author recognition test to measure their reading experience. Our re-analyses did not reveal evidence for an effect of input modality on word recognition accuracy, nor for interactions with word difficulty or language experience. Word difficulty interacted with reading experience in that frequent readers were more accurate in recognizing difficult words than individuals who read less frequently. Practical implications are discussed.
  • Wolf, M. C., Smith, A. C., Meyer, A. S., & Rowland, C. F. (2019). Modality effects in vocabulary acquisition. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 1212-1218). Montreal, QC: Cognitive Science Society.

    Abstract

    It is unknown whether modality affects the efficiency with which humans learn novel word forms and their meanings, with previous studies reporting both written and auditory advantages. The current study implements controls whose absence in previous work likely explains such contradictory findings. In two novel word learning experiments, participants were trained and tested on pseudoword-novel object pairs, with controls on modality of test, modality of meaning, duration of exposure, and transparency of word form. In both experiments word forms were presented in either their written or spoken form, each paired with a pictorial meaning (novel object). Following a 20-minute filler task, participants were tested on their ability to identify the picture-word form pairs on which they were trained. A between-subjects design generated four participant groups per experiment: 1) written training, written test; 2) written training, spoken test; 3) spoken training, written test; 4) spoken training, spoken test. In Experiment 1 the written stimulus was presented for a time period equal to the duration of the spoken form. Results showed that when the duration of exposure was equal, participants displayed a written training benefit. Given that words can be read faster than the time taken for the spoken form to unfold, in Experiment 2 the written form was presented for 300 ms, sufficient time to read the word yet 65% shorter than the duration of the spoken form. No modality effect was observed under these conditions, when exposure to the word form was equivalent. These results demonstrate, at least for proficient readers, that when exposure to the word form is controlled across modalities, the efficiency with which word form-meaning associations are learnt does not differ. Our results therefore suggest that, although we typically begin as aural-only word learners, we ultimately converge on learning mechanisms that learn equally efficiently from both written and spoken materials.
  • Wolf, M. C., Muijselaar, M. M. L., Boonstra, A. M., & De Bree, E. H. (2019). The relationship between reading and listening comprehension: Shared and modality-specific components. Reading and Writing, 32(7), 1747-1767. doi:10.1007/s11145-018-9924-8.

    Abstract

    This study aimed to increase our understanding of the relationship between reading and listening comprehension. Both in comprehension theory and in educational practice, reading and listening comprehension are often treated as interchangeable, overlooking the modality-specific aspects of each. Three questions were addressed. First, it was examined to what extent reading and listening comprehension comprise modality-specific, distinct skills or an overlapping, domain-general skill, in terms of the amount of variance in one comprehension type explained by the other. Second, general and modality-unique subskills of reading and listening comprehension were sought by assessing the contributions of the foundational skills word reading fluency, vocabulary, memory, attention, and inhibition to both comprehension types. Lastly, the practice of using either listening comprehension or vocabulary as a proxy for general comprehension was investigated. Reading and listening comprehension tasks with the same format were administered to 85 second- and third-grade children. Analyses revealed that reading comprehension explained 34% of the variance in listening comprehension, and listening comprehension 40% of the variance in reading comprehension. Vocabulary and word reading fluency were found to be shared contributors to both reading and listening comprehension. None of the other cognitive skills contributed significantly to reading or listening comprehension. These results indicate that only part of the comprehension process is domain-general and unaffected by the modality in which the information is provided. Vocabulary especially seems to play a large role in this domain-general part. The findings warrant a more prominent focus on modality-specific aspects of both reading and listening comprehension in research and education.
  • Wong, M. M. K., Hoekstra, S. D., Vowles, J., Watson, L. M., Fuller, G., Németh, A. H., Cowley, S. A., Ansorge, O., Talbot, K., & Becker, E. B. E. (2018). Neurodegeneration in SCA14 is associated with increased PKCγ kinase activity, mislocalization and aggregation. Acta Neuropathologica Communications, 6: 99. doi:10.1186/s40478-018-0600-7.

    Abstract

    Spinocerebellar ataxia type 14 (SCA14) is a subtype of the autosomal dominant cerebellar ataxias that is characterized by slowly progressive cerebellar dysfunction and neurodegeneration. SCA14 is caused by mutations in the PRKCG gene, encoding protein kinase C gamma (PKCγ). Despite the identification of 40 distinct disease-causing mutations in PRKCG, the pathological mechanisms underlying SCA14 remain poorly understood. Here we report the molecular neuropathology of SCA14 in post-mortem cerebellum and in human patient-derived induced pluripotent stem cells (iPSCs) carrying two distinct SCA14 mutations in the C1 domain of PKCγ, H36R and H101Q. We show that endogenous expression of these mutations results in the cytoplasmic mislocalization and aggregation of PKCγ in both patient iPSCs and cerebellum. PKCγ aggregates were not efficiently targeted for degradation. Moreover, mutant PKCγ was found to be hyper-activated, resulting in increased substrate phosphorylation. Together, our findings demonstrate that a combination of both, loss-of-function and gain-of-function mechanisms are likely to underlie the pathogenesis of SCA14, caused by mutations in the C1 domain of PKCγ. Importantly, SCA14 patient iPSCs were found to accurately recapitulate pathological features observed in post-mortem SCA14 cerebellum, underscoring their potential as relevant disease models and their promise as future drug discovery tools.

  • Wongratwanich, P., Shimabukuro, K., Konishi, M., Nagasaki, T., Ohtsuka, M., Suei, Y., Nakamoto, T., Verdonschot, R. G., Kanesaki, T., Sutthiprapaporn, P., & Kakimoto, N. (2021). Do various imaging modalities provide potential early detection and diagnosis of medication-related osteonecrosis of the jaw? A review. Dentomaxillofacial Radiology, 50: 20200417. doi:10.1259/dmfr.20200417.

    Abstract


    Objective: Patients with medication-related osteonecrosis of the jaw (MRONJ) often visit their dentists at advanced stages and subsequently require treatments that greatly affect quality of life. Currently, no clear diagnostic criteria exist to assess MRONJ, and the definitive diagnosis relies solely on clinical bone exposure. This ambiguity leads to diagnostic delay, complications, and unnecessary burden. This article aims to review the usage and findings of imaging modalities for MRONJ in order to identify possible approaches for early detection.

    Methods: Literature searches were conducted using PubMed, Web of Science, Scopus, and Cochrane Library to review all diagnostic imaging modalities for MRONJ.

    Results: Panoramic radiography offers a fundamental understanding of the lesions. Imaging findings were comparable between non-exposed and exposed MRONJ, showing osteolysis, osteosclerosis, and thickened lamina dura. Mandibular cortex index Class II could be a potential early MRONJ indicator. Three-dimensional modalities, CT and CBCT, were able to show more features unique to MRONJ, such as a solid-type periosteal reaction, buccal predominance of cortical perforation, and a bone-within-bone appearance. MRI signal intensities of vital bone are hypointense on T1WI and hyperintense on T2WI and STIR, whereas necrotic bone shows hypointensity on all of T1WI, T2WI, and STIR. Functional imaging is the most sensitive method but is usually performed for metastasis detection rather than as a diagnostic tool for early MRONJ.

    Conclusion: Currently, MRONJ-specific imaging features cannot be firmly established. However, the current data are valuable, as they may lead to a more efficient diagnostic procedure along with a more suitable selection of imaging modalities.
  • Wright, S. E., Windhouwer, M., Schuurman, I., & Broeder, D. (2014). Segueing from a Data Category Registry to a Data Concept Registry. In Proceedings of the 11th International Conference on Terminology and Knowledge Engineering (TKE 2014).

    Abstract

    The terminology Community of Practice has long standardized data categories in the framework of ISO TC 37. ISO 12620:2009 specifies the data model and procedures for a Data Category Registry (DCR), which has been implemented by the Max Planck Institute for Psycholinguistics as the ISOcat DCR. The DCR has been used not only by ISO TC 37 but also by the CLARIN research infrastructure. This paper describes how the needs of these communities have started to diverge, and the process of segueing from a DCR to a Data Concept Registry in order to meet the needs of both communities.
  • Xiang, H.-D., Fonteijn, H. M., Norris, D. G., & Hagoort, P. (2010). Topographical functional connectivity pattern in the perisylvian language networks. Cerebral Cortex, 20, 549-560. doi:10.1093/cercor/bhp119.

    Abstract

    We performed a resting-state functional connectivity study to investigate directly the functional correlations within the perisylvian language networks by seeding from 3 subregions of Broca's complex (pars opercularis, pars triangularis, and pars orbitalis) and their right hemisphere homologues. A clear topographical functional connectivity pattern in the left middle frontal, parietal, and temporal areas was revealed for the 3 left seeds. This is the first demonstration that a functional connectivity topology can be observed in the perisylvian language networks. The results support the assumption of a functional division for phonology, syntax, and semantics in Broca's complex, as proposed by the memory, unification, and control (MUC) model, and indicate a topographical functional organization in the perisylvian language networks, suggesting a possible division of labor for phonological, syntactic, and semantic functions in the left frontal, parietal, and temporal areas.
  • Yang, J., Zhu, H., & Tian, X. (2018). Group-level multivariate analysis in EasyEEG toolbox: Examining the temporal dynamics using topographic responses. Frontiers in Neuroscience, 12: 468. doi:10.3389/fnins.2018.00468.

    Abstract

    Electroencephalography (EEG) provides high-temporal-resolution cognitive information from non-invasive recordings. However, the common practice of using only a subset of sensors in ERP analysis makes it hard to obtain holistic and precise dynamic results. Selecting or grouping subsets of sensors may also be subject to selection bias and multiple-comparison problems, and is further complicated by individual differences in group-level analysis. More importantly, changes in neural generators and variations in response magnitude from the same neural sources are difficult to separate, which limits the capacity to test different aspects of cognitive hypotheses. We introduce EasyEEG, a toolbox that includes several multivariate analysis methods to directly test cognitive hypotheses based on topographic responses that include data from all sensors. These multivariate methods can investigate effects in the dimensions of response magnitude and topographic patterns separately using data in the sensor space, thereby enabling assessment of neural response dynamics. The concise workflow and the modular design provide user-friendly and programmer-friendly features. Users of all levels can benefit from the open-source, free EasyEEG to obtain a straightforward solution for efficient processing of EEG data and a complete pipeline from raw data to final results for publication.
  • Yang, A., & Chen, A. (2014). Prosodic focus marking in child and adult Mandarin Chinese. In C. Gussenhoven, Y. Chen, & D. Dediu (Eds.), Proceedings of the 4th International Symposium on Tonal Aspects of Language (pp. 54-58).

    Abstract

    This study investigates how Mandarin Chinese-speaking children and adults use prosody to mark focus in spontaneous speech. SVO sentences were elicited from 4- and 8-year-olds and adults in a game setting. Sentence-medial verbs were acoustically analysed for both duration and pitch range in different focus conditions. We found that, like the adults, the 8-year-olds used both duration and pitch range to distinguish focus from non-focus. The 4-year-olds used only duration to distinguish focus from non-focus, unlike the adults and 8-year-olds. None of the three groups of speakers distinguished contrastive focus from non-contrastive focus using pitch range or duration. Regarding the distinction between narrow focus and broad focus, the 4- and 8-year-olds used both pitch range and duration for this purpose, while the adults used only duration.
  • Yang, A., & Chen, A. (2014). Prosodic focus-marking in Chinese four- and eight-year-olds. In N. Campbell, D. Gibbon, & D. Hirst (Eds.), Proceedings of Speech Prosody 2014 (pp. 713-717).

    Abstract

    This study investigates how Mandarin Chinese-speaking children use prosody to distinguish focus from non-focus, and focus types differing in size of constituent and contrastivity. SVO sentences were elicited from four- and eight-year-olds in a game setting. Sentence-medial verbs were acoustically analysed for both duration and pitch range in different focus conditions. The children started to use duration to differentiate focus from non-focus at the age of four, but their use of pitch range varied with age and depended on the non-focus conditions (pre- vs. post-focus) and the lexical tones of the verbs. Further, the children in both age groups used pitch range but not duration to differentiate narrow focus from broad focus, and they did not differentiate contrastive narrow focus from non-contrastive narrow focus using duration or pitch range. The results indicate that Chinese children acquire the prosodic means (duration and pitch range) of marking focus in stages, and their acquisition of these two means appears to be early compared to children speaking an intonation language, for example, Dutch.
  • Yang, Y., Dai, B., Howell, P., Wang, X., Li, K., & Lu, C. (2014). White and grey matter changes in the language network during healthy aging. PLoS One, 9(9): e108077. doi:10.1371/journal.pone.0108077.

    Abstract

    Neural structures change with age but there is no consensus on the exact processes involved. This study tested the hypothesis that white and grey matter in the language network changes during aging according to a “last in, first out” process. The fractional anisotropy (FA) of white matter and cortical thickness of grey matter were measured in 36 participants whose ages ranged from 55 to 79 years. Within the language network, the dorsal pathway connecting the mid-to-posterior superior temporal cortex (STC) and the inferior frontal cortex (IFC) was affected more by aging in both FA and thickness than the other dorsal pathway connecting the STC with the premotor cortex and the ventral pathway connecting the mid-to-anterior STC with the ventral IFC. These results were independently validated in a second group of 20 participants whose ages ranged from 50 to 73 years. The pathway that is most affected during aging matures later than the other two pathways (which are present at birth). The results are interpreted as showing that the neural structures which mature later are affected more than those that mature earlier, supporting the “last in, first out” theory.
  • Yoshihara, M., Nakayama, M., Verdonschot, R. G., Hino, Y., & Lupker, S. J. (2021). Orthographic properties of distractors do influence phonological Stroop effects: Evidence from Japanese Romaji distractors. Memory & Cognition, 49(3), 600-612. doi:10.3758/s13421-020-01103-8.

    Abstract

    In attempting to understand mental processes, it is important to use a task that appropriately reflects the underlying processes being investigated. Recently, Verdonschot and Kinoshita (Memory & Cognition, 46, 410-425, 2018) proposed that a variant of the Stroop task, the "phonological Stroop task", might be a suitable tool for investigating speech production. The major advantage of this task is that it is apparently not affected by the orthographic properties of the stimuli, unlike other commonly used tasks (e.g., associative-cuing and word-reading tasks). The viability of this proposal was examined in the present experiments by manipulating the script types of Japanese distractors. For Romaji distractors (e.g., "kushi"), color-naming responses were faster when the initial phoneme was shared between the color name and the distractor than when the initial phonemes were different, thereby showing a phoneme-based phonological Stroop effect (Experiment 1). In contrast, no such effect was observed when the same distractors were presented in Katakana, replicating Verdonschot and Kinoshita's original results (Experiment 2). A phoneme-based effect was again found when the Katakana distractors used in Verdonschot and Kinoshita's original study were transcribed and presented in Romaji (Experiment 3). Because the observation of a phonemic effect directly depended on the orthographic properties of the distractor stimuli, we conclude that the phonological Stroop task is also susceptible to orthographic influences.
  • Zaadnoordijk, L., Buckler, H., Cusack, R., Tsuji, S., & Bergmann, C. (2021). A global perspective on testing infants online: Introducing ManyBabies-AtHome. Frontiers in Psychology, 12: 703234. doi:10.3389/fpsyg.2021.703234.

    Abstract

    Online testing holds great promise for infant scientists. It could increase participant diversity, improve reproducibility and collaborative possibilities, and reduce costs for researchers and participants. However, despite the rise of platforms and participant databases, little work has been done to overcome the challenges of making this approach available to researchers across the world. In this paper, we elaborate on the benefits of online infant testing from a global perspective and identify challenges for the international community that have been outside of the scope of previous literature. Furthermore, we introduce ManyBabies-AtHome, an international, multi-lab collaboration that is actively working to facilitate practical and technical aspects of online testing as well as address ethical concerns regarding data storage and protection, and cross-cultural variation. The ultimate goal of this collaboration is to improve the method of testing infants online and make it globally available.
  • Zampieri, M., & Gebre, B. G. (2014). VarClass: An open-source language identification tool for language varieties. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 3305-3308).

    Abstract

    This paper presents VarClass, an open-source tool for language identification, available both for download and through a user-friendly graphical interface. The main difference between VarClass and other state-of-the-art language identification tools is its focus on language varieties. General-purpose language identification tools do not take language varieties into account, and our work aims to fill this gap. VarClass currently contains language models for over 27 languages, 10 of which are language varieties. We report an average performance of over 90.5% accuracy on a challenging dataset. More language models will be included in the upcoming months.
  • Zeshan, U. (2004). Basic English course taught in Indian Sign Language (Ali Yavar Jung National Institute for the Hearing Handicapped, Ed.). Mumbai: National Institute for the Hearing Handicapped.
  • Zeshan, U. (2003). Aspects of Türk Işaret Dili (Turkish Sign Language). Sign Language and Linguistics, 6(1), 43-75. doi:10.1075/sll.6.1.04zes.

    Abstract

    This article provides a first overview of some striking grammatical structures in Türk İşaret Dili (Turkish Sign Language, TID), the sign language used by the Deaf community in Turkey. The data are described with a typological perspective in mind, focusing on aspects of TID grammar that are typologically unusual across sign languages. After giving an overview of the historical, sociolinguistic and educational background of TID and the language community using this sign language, five domains of TID grammar are investigated in detail. These include a movement derivation signalling completive aspect, three types of nonmanual negation — headshake, backward head tilt, and puffed cheeks — and their distribution, cliticization of the negator NOT to a preceding predicate host sign, an honorific whole-entity classifier used to refer to humans, and a question particle, its history and current status in the language. A final evaluation points out the significance of these data for sign language research and looks at perspectives for a deeper understanding of the language and its history.
  • Zeshan, U. (2004). Interrogative constructions in sign languages - Cross-linguistic perspectives. Language, 80(1), 7-39.

    Abstract

    This article reports on results from a broad crosslinguistic study based on data from thirty-five signed languages around the world. The study is the first of its kind, and the typological generalizations presented here cover the domain of interrogative structures as they appear across a wide range of geographically and genetically distinct signed languages. Manual and nonmanual ways of marking basic types of questions in signed languages are investigated. As a result, it becomes clear that the range of crosslinguistic variation is extensive for some subparameters, such as the structure of question-word paradigms, while other parameters, such as the use of nonmanual expressions in questions, show more similarities across signed languages. Finally, it is instructive to compare the findings from signed language typology to relevant data from spoken languages at a more abstract, crossmodality level.
  • Zeshan, U. (2004). Hand, head and face - negative constructions in sign languages. Linguistic Typology, 8(1), 1-58. doi:10.1515/lity.2004.003.

    Abstract

    This article presents a typology of negative constructions across a substantial number of sign languages from around the globe. After situating the topic within the wider context of linguistic typology, the main negation strategies found across sign languages are described. Nonmanual negation includes the use of head movements and facial expressions for negation and is of great importance in sign languages as well as particularly interesting from a typological point of view. As far as manual signs are concerned, independent negative particles represent the dominant strategy, but there are also instances of irregular negation in most sign languages. Irregular negatives may take the form of suppletion, cliticisation, affixing, or internal modification of a sign. The results of the study lead to interesting generalisations about similarities and differences between negatives in signed and spoken languages.
  • Zhang, Y., Ding, R., Frassinelli, D., Tuomainen, J., Klavinskis-Whiting, S., & Vigliocco, G. (2021). Electrophysiological signatures of second language multimodal comprehension. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 2971-2977). Vienna: Cognitive Science Society.

    Abstract

    Language is multimodal: non-linguistic cues, such as prosody, gestures and mouth movements, are always present in face-to-face communication and interact to support processing. In this paper, we ask whether and how multimodal cues affect L2 processing by recording EEG from highly proficient bilinguals watching naturalistic materials. For each word, we quantified surprisal and the informativeness of prosody, gestures, and mouth movements. We found that each cue modulates the N400: prosodic accentuation, meaningful gestures, and informative mouth movements all reduce the N400. Further, effects of meaningful gestures but not mouth informativeness are enhanced by prosodic accentuation, whereas effects of mouth movements are enhanced by meaningful gestures but reduced by beat gestures. Compared with L1, L2 participants benefit less from cues and their interactions, except for meaningful gestures and mouth movements. Thus, in real-world language comprehension, L2 comprehenders use multimodal cues just as L1 speakers do, albeit to a lesser extent.
  • Yu, C., Zhang, Y., Slone, L. K., & Smith, L. B. (2021). The infant’s view redefines the problem of referential uncertainty in early word learning. Proceedings of the National Academy of Sciences of the United States of America, 118(52): e2107019118. doi:10.1073/pnas.2107019118.

    Abstract

    The learning of first object names is deemed a hard problem due to the uncertainty inherent in mapping a heard name to the intended referent in a cluttered and variable world. However, human infants readily solve this problem. Despite considerable theoretical discussion, relatively little is known about the uncertainty infants face in the real world. We used head-mounted eye tracking during parent–infant toy play and quantified the uncertainty by measuring the distribution of infant attention to the potential referents when a parent named both familiar and unfamiliar toy objects. The results show that infant gaze upon hearing an object name is often directed to a single referent which is equally likely to be a wrong competitor or the intended target. This bimodal gaze distribution clarifies and redefines the uncertainty problem and constrains possible solutions.
  • Zhang, Y., Yurovsky, D., & Yu, C. (2021). Cross-situational learning from ambiguous egocentric input is a continuous process: Evidence using the human simulation paradigm. Cognitive Science, 45(7): e13010. doi:10.1111/cogs.13010.

    Abstract

    Recent laboratory experiments have shown that both infant and adult learners can acquire word-referent mappings using cross-situational statistics. The vast majority of the work on this topic has used unfamiliar objects presented on neutral backgrounds as the visual contexts for word learning. However, these laboratory contexts are much different than the real-world contexts in which learning occurs. Thus, the feasibility of generalizing cross-situational learning beyond the laboratory is in question. Adapting the Human Simulation Paradigm, we conducted a series of experiments examining cross-situational learning from children's egocentric videos captured during naturalistic play. Focusing on individually ambiguous naming moments that naturally occur during toy play, we asked how statistical learning unfolds in real time through accumulating cross-situational statistics in naturalistic contexts. We found that even when learning situations were individually ambiguous, learners' performance gradually improved over time. This improvement was driven in part by learners' use of partial knowledge acquired from previous learning situations, even when they had not yet discovered correct word-object mappings. These results suggest that word learning is a continuous process by means of real-time information integration.
  • Zhang, Y., Amatuni, A., Cain, E., Wang, X., Crandall, D., & Yu, C. (2021). Human learners integrate visual and linguistic information in cross-situational verb learning. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 2267-2273). Vienna: Cognitive Science Society.

    Abstract

    Learning verbs is challenging because it is difficult to infer the precise meaning of a verb when there are a multitude of relations that one can derive from a single event. To study this verb learning challenge, we used children's egocentric view collected from naturalistic toy-play interaction as learning materials and investigated how visual and linguistic information provided in individual naming moments as well as cross-situational information provided from multiple learning moments can help learners resolve this mapping problem using the Human Simulation Paradigm. Our results show that learners benefit from seeing children's egocentric views compared to third-person observations. In addition, linguistic information can help learners identify the correct verb meaning by eliminating possible meanings that do not belong to the linguistic category. Learners are also able to integrate visual and linguistic information both within and across learning situations to reduce the ambiguity in the space of possible verb meanings.
  • Zhang, Y., Chen, C.-h., & Yu, C. (2019). Mechanisms of cross-situational learning: Behavioral and computational evidence. In Advances in Child Development and Behavior; vol. 56 (pp. 37-63).

    Abstract

    Word learning happens in everyday contexts with many words and many potential referents for those words in view at the same time. It is challenging for young learners to find the correct referent for an unknown word at the moment of hearing it. This problem of referential uncertainty has been deemed the crux of early word learning (Quine, 1960). Recent empirical and computational studies have found support for a statistical solution to the problem, termed cross-situational learning. Cross-situational learning allows learners to acquire word meanings across multiple exposures, even though each individual exposure is referentially uncertain. Recent empirical research shows that infants, children and adults rely on cross-situational learning to learn new words (Smith & Yu, 2008; Suanda, Mugwanya, & Namy, 2014; Yu & Smith, 2007). However, researchers have found evidence supporting two very different theoretical accounts of the underlying learning mechanisms: Hypothesis Testing (Gleitman, Cassidy, Nappa, Papafragou, & Trueswell, 2005; Markman, 1992) and Associative Learning (Frank, Goodman, & Tenenbaum, 2009; Yu & Smith, 2007). Hypothesis Testing is generally characterized as a form of learning in which a coherent hypothesis about a specific word-object mapping is formed, often in conceptually constrained ways; the hypothesis is then either accepted or rejected in the light of additional evidence. Proponents of the Associative Learning framework, by contrast, characterize learning as the aggregation of information over time through implicit associative mechanisms: a learner acquires the meaning of a word when the association between the word and the referent becomes relatively strong. In this chapter, we consider these two psychological theories in the context of cross-situational word-referent learning. By reviewing recent empirical and cognitive modeling studies, we aim to deepen our understanding of the underlying word learning mechanisms by examining and comparing the two theoretical accounts.
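    The associative account described in this abstract can be illustrated with a minimal sketch (not from the chapter, and not any specific published model): a learner that simply accumulates word-object co-occurrence counts across situations and, when queried, selects the referent with the strongest association. All names and data below are illustrative assumptions.

    ```python
    from collections import defaultdict

    def learn(situations):
        """Aggregate word-object co-occurrence counts across situations.

        Each situation pairs the words heard with the objects in view;
        no single exposure identifies the referent on its own.
        """
        counts = defaultdict(lambda: defaultdict(int))
        for words, objects in situations:
            for w in words:
                for o in objects:
                    counts[w][o] += 1  # implicit associative strengthening
        return counts

    def best_referent(counts, word):
        # The word's meaning is taken to be its most strongly associated object.
        return max(counts[word], key=counts[word].get)

    # Each exposure is referentially uncertain (two candidate objects),
    # but the correct mappings emerge across situations.
    situations = [
        (["ball", "dog"], ["BALL", "DOG"]),
        (["ball", "cup"], ["BALL", "CUP"]),
        (["dog", "cup"], ["DOG", "CUP"]),
    ]
    counts = learn(situations)
    print(best_referent(counts, "ball"))  # "BALL": 2 co-occurrences vs. 1 each
    ```

    A Hypothesis Testing learner would instead track a single candidate mapping per word and discard it when disconfirmed, rather than aggregating graded counts over all candidates.
    
    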
  • Zheng, X., Roelofs, A., Farquhar, J., & Lemhöfer, K. (2018). Monitoring of language selection errors in switching: Not all about conflict. PLoS One, 13(11): e0200397. doi:10.1371/journal.pone.0200397.

    Abstract

    Although bilingual speakers are very good at selectively using one language rather than another, sometimes language selection errors occur. To investigate how bilinguals monitor their speech errors and control their languages in use, we recorded event-related potentials (ERPs) in unbalanced Dutch-English bilingual speakers in a cued language-switching task. We tested the conflict-based monitoring model of Nozari and colleagues by investigating the error-related negativity (ERN) and comparing the effects of the two switching directions (i.e., to the first language, L1 vs. to the second language, L2). Results show that the speakers made more language selection errors when switching from their L2 to the L1 than vice versa. In the EEG, we observed a robust ERN effect following language selection errors compared to correct responses, reflecting monitoring of speech errors. Most interestingly, the ERN effect was enlarged when the speakers were switching to their L2 (less conflict) compared to switching to the L1 (more conflict). Our findings do not support the conflict-based monitoring model. We discuss an alternative account in terms of error prediction and reinforcement learning.
  • Zheng, X., Roelofs, A., & Lemhöfer, K. (2018). Language selection errors in switching: language priming or cognitive control? Language, Cognition and Neuroscience, 33(2), 139-147. doi:10.1080/23273798.2017.1363401.

    Abstract

    Although bilingual speakers are very good at selectively using one language rather than another, sometimes language selection errors occur. We examined the relative contribution of top-down cognitive control and bottom-up language priming to these errors. Unbalanced Dutch-English bilinguals named pictures and were cued to switch between languages under time pressure. We also manipulated the number of same-language trials before a switch (long vs. short runs). Results show that speakers made more language selection errors when switching from their second language (L2) to the first language (L1) than vice versa. Furthermore, they made more errors when switching to the L1 after a short compared to a long run of L2 trials. In the reverse switching direction (L1 to L2), run length had no effect. These findings are most compatible with an account of language selection errors that assigns a strong role to top-down processes of cognitive control.

    Additional information

    plcp_a_1363401_sm2537.docx
  • Zheng, X., & Lemhöfer, K. (2019). The “semantic P600” in second language processing: When syntax conflicts with semantics. Neuropsychologia, 127, 131-147. doi:10.1016/j.neuropsychologia.2019.02.010.

    Abstract

    In sentences like “the mouse that chased the cat was hungry”, the syntactically correct interpretation (the mouse chases the cat) is contradicted by semantic and pragmatic knowledge. Previous research has shown that L1 speakers sometimes base sentence interpretation on this type of knowledge (so-called “shallow” or “good-enough” processing). We made use of both behavioural and ERP measurements to investigate whether L2 learners differ from native speakers in the extent to which they engage in “shallow” syntactic processing. German learners of Dutch as well as Dutch native speakers read sentences containing relative clauses (as in the example above) for which the plausible thematic roles were or were not reversed, and made plausibility judgments. The results show that, behaviourally, L2 learners had more difficulty than native speakers in discriminating plausible from implausible sentences. In the ERPs, we replicated the previously reported finding of a “semantic P600” for semantic reversal anomalies in native speakers, probably reflecting the effort to resolve the syntax-semantics conflict. In L2 learners, though, this P600 was largely attenuated and surfaced only in those trials that were judged correctly for plausibility. These results generally point to a more prevalent, though not exclusive, occurrence of shallow syntactic processing in L2 learners.
  • Zhernakova, A., Elbers, C. C., Ferwerda, B., Romanos, J., Trynka, G., Dubois, P. C., De Kovel, C. G. F., Franke, L., Oosting, M., Barisani, D., Bardella, M. T., Joosten, L. A. B., Saavalainen, P., van Heel, D. A., Catassi, C., Netea, M. G., Wijmenga, C., & the Finnish Celiac Disease Study Group (2010). Evolutionary and Functional Analysis of Celiac Risk Loci Reveals SH2B3 as a Protective Factor against Bacterial Infection. American Journal of Human Genetics, 86(6), 970-977. doi:10.1016/j.ajhg.2010.05.004.

    Abstract

    Celiac disease (CD) is an intolerance to dietary proteins of wheat, barley, and rye. CD may have substantial morbidity, yet it is quite common with a prevalence of 1%-2% in Western populations. It is not clear why the CD phenotype is so prevalent despite its negative effects on human health, especially because appropriate treatment in the form of a gluten-free diet has only been available since the 1950s, when dietary gluten was discovered to be the triggering factor. The high prevalence of CD might suggest that genes underlying this disease may have been favored by the process of natural selection. We assessed signatures of selection for ten confirmed CD-associated loci in several genome-wide data sets, comprising 8154 controls from four European populations and 195 individuals from a North African population, by studying haplotype lengths via the integrated haplotype score (iHS) method. Consistent signs of positive selection for CD-associated derived alleles were observed in three loci: IL12A, IL18RAP, and SH2B3. For the SH2B3 risk allele, we also show a difference in allele frequency distribution (F(st)) between HapMap phase II populations. Functional investigation of the effect of the SH2B3 genotype in response to lipopolysaccharide and muramyl dipeptide revealed that carriers of the SH2B3 rs3184504*A risk allele showed stronger activation of the NOD2 recognition pathway. This suggests that SH2B3 plays a role in protection against bacterial infection, and it provides a possible explanation for the selective sweep on SH2B3, which occurred sometime between 1200 and 1700 years ago.
  • Zhong, S., Wei, L., Zhao, C., Yang, L., Di, Z., Francks, C., & Gong, G. (2021). Interhemispheric relationship of genetic influence on human brain connectivity. Cerebral Cortex, 31(1), 77-88. doi:10.1093/cercor/bhaa207.

    Abstract

    To understand the origins of interhemispheric differences and commonalities/coupling in human brain wiring, it is crucial to determine how homologous interregional connectivities of the left and right hemispheres are genetically determined and related. To address this, in the present study, we analyzed human twin and pedigree samples with high-quality diffusion magnetic resonance imaging tractography and estimated the heritability and genetic correlation of homologous left and right white matter (WM) connections. The results showed that the heritability of WM connectivity was similar and coupled between the 2 hemispheres and that the degree of overlap in genetic factors underlying homologous WM connectivity (i.e., interhemispheric genetic correlation) varied substantially across the human brain: from complete overlap to complete nonoverlap. In particular, the heritability was significantly stronger, and the chance of complete interhemispheric overlap in genetic factors higher, in subcortical WM connections than in cortical WM connections. In addition, the heritability and interhemispheric genetic correlations were stronger for long-range connections than for short-range connections. These findings highlight the genetic determinants of WM connectivity and its interhemispheric relationships, and provide insight into the genetic basis of WM connectivity asymmetries in both healthy and disease states.

    Additional information

    Supplementary data
  • Zhou, W., Broersma, M., & Cutler, A. (2021). Asymmetric memory for birth language perception versus production in young international adoptees. Cognition, 213: 104788. doi:10.1016/j.cognition.2021.104788.

    Abstract

    Adults who as children were adopted into a different linguistic community retain knowledge of their birth language. The possession (without awareness) of such knowledge is known to facilitate the (re)learning of birth-language speech patterns; this perceptual learning predicts such adults' production success as well, indicating that the retained linguistic knowledge is abstract in nature. Adoptees' acquisition of their adopted language is fast and complete; birth-language mastery disappears rapidly, although this latter process has been little studied. Here, 46 international adoptees from China aged four to 10 years, with Dutch as their new language, plus 47 matched non-adopted Dutch-native controls and 40 matched non-adopted Chinese controls, undertook, across a two-week period, 10 blocks of training in perceptually identifying Chinese speech contrasts (one segmental, one tonal) which were unlike any Dutch contrasts. Chinese controls easily accomplished all these tasks. The same participants also provided speech production data in an imitation task. In perception, adoptees and Dutch controls scored equivalently poorly at the outset of training; with training, the adoptees significantly improved while the Dutch controls did not. In production, adoptees' imitations both before and after training could be better identified, and received higher goodness ratings, than those of Dutch controls. The perception results confirm that birth-language knowledge is stored and can facilitate re-learning in post-adoption childhood; the production results suggest that although processing of phonological category detail appears to depend on access to the stored knowledge, general articulatory dimensions can at this age also still be remembered, and may facilitate spoken imitation.

    Additional information

    stimulus materials
  • Zhou, W., & Broersma, M. (2014). Perception of birth language tone contrasts by adopted Chinese children. In C. Gussenhoven, Y. Chen, & D. Dediu (Eds.), Proceedings of the 4th International Symposium on Tonal Aspects of Language (pp. 63-66).

    Abstract

    The present study investigates how long after adoption adoptees forget the phonology of their birth language. Chinese children who were adopted by Dutch families were tested on the perception of birth language tone contrasts before, during, and after perceptual training. Experiment 1 investigated Cantonese tone 2 (High-Rising) and tone 5 (Low-Rising), and Experiment 2 investigated Mandarin tone 2 (High-Rising) and tone 3 (Low-Dipping). In both experiments, participants were adoptees and non-adopted Dutch controls. Results of both experiments show that the tone contrasts were very difficult to perceive for the adoptees, and that adoptees were not better at perceiving the tone contrasts than their non-adopted Dutch peers, before or after training. This demonstrates that forgetting took place relatively soon after adoption, and that the re-exposure that the adoptees were presented with did not lead to an improvement greater than that of the Dutch control participants. Thus, the findings confirm what has been anecdotally reported by adoptees and their parents, but what had not been empirically tested before, namely that birth language forgetting occurs very soon after adoption.
  • Zhu, Z., Bastiaansen, M. C. M., Hakun, J. G., Petersson, K. M., Wang, S., & Hagoort, P. (2019). Semantic unification modulates N400 and BOLD signal change in the brain: A simultaneous EEG-fMRI study. Journal of Neurolinguistics, 52: 100855. doi:10.1016/j.jneuroling.2019.100855.

    Abstract

    Semantic unification during sentence comprehension has been associated with amplitude change of the N400 in event-related potential (ERP) studies, and with activation in the left inferior frontal gyrus (IFG) in functional magnetic resonance imaging (fMRI) studies. However, the specificity of this activation to semantic unification remains unknown. To more closely examine the brain processes involved in semantic unification, we employed simultaneous EEG-fMRI to time-lock the semantic-unification-related N400 change, and integrated trial-by-trial variation in both N400 and BOLD change beyond the condition-level BOLD change difference measured in traditional fMRI analyses. Participants read sentences in which semantic unification load was parametrically manipulated by varying cloze probability. Separately, ERP and fMRI results replicated previous findings, in that semantic unification load parametrically modulated the amplitude of the N400 and cortical activation. Integrated EEG-fMRI analyses revealed a different pattern, in which functional activity in the left IFG and bilateral supramarginal gyrus (SMG) was associated with N400 amplitude, with the left IFG activation and bilateral SMG activation being selective to the condition-level and trial-level of semantic unification load, respectively. By employing integrated EEG-fMRI analyses, this study is among the first to shed light on how trial-level variation can be integrated in the study of language comprehension.
  • Zimianiti, E. (2021). Adjective-noun constructions in Griko: Focusing on measuring adjectives and their placement in the nominal domain. LingUU Journal, 5(2), 62-75.

    Abstract

    This paper examines adjectival placement in Griko, an Italian-Greek language variety. Guardiano and Stavrou (2019, 2014) have argued that there is a gap in the evidence for the diachrony of adjectives in prenominal position, in particular of measuring adjectives. Evidence contradicting these claims is presented in this paper. After considering the placement of adjectives in Greek and Italian, and their similarities and differences, the adjectival pattern of Griko is analysed. The analysis is based mostly on written data from the early 20th century, attesting the prenominal position of adjectives and adding to the diachronic schema of adjectival placement in Griko.
  • Zimianiti, E., Dimitrakopoulou, M., & Tsangalidis, A. (2021). Thematic roles in dementia: The case of psychological verbs. In A. Botinis (Ed.), ExLing 2021: Proceedings of the 12th International Conference of Experimental Linguistics (pp. 269-272). Athens, Greece: ExLing Society.

    Abstract

    This study investigates the difficulty of people with Mild Cognitive Impairment (MCI), mild and moderate Alzheimer’s disease (AD) in the production and comprehension of psychological verbs, as thematic realization may involve both the canonical and non-canonical realization of arguments. More specifically, we aim to examine whether there is a deficit in the mapping of syntactic and semantic representations in psych-predicates regarding Greek-speaking individuals with MCI and AD, and whether the linguistic abilities associated with θ-role assignment decrease as the disease progresses. Moreover, given the decline of cognitive abilities in people with MCI and AD, we explore the effects of components of memory (Semantic, Episodic, and Working Memory) on the assignment of thematic roles in constructions with psychological verbs.
  • Zinken, J., Kaiser, J., Weidner, M., Mondada, L., Rossi, G., & Sorjonen, M.-L. (2021). Rule talk: Instructing proper play with impersonal deontic statements. Frontiers in Communication, 6: 660394. doi:10.3389/fcomm.2021.660394.

    Abstract

    The present paper explores how rules are enforced and talked about in everyday life. Drawing on a corpus of board game recordings across European languages, we identify a sequential and praxeological context for rule talk. After a game rule is breached, a participant enforces proper play and then formulates a rule with an impersonal deontic statement (e.g. ‘It’s not allowed to do this’). Impersonal deontic statements express what may or may not be done without tying the obligation to a particular individual. Our analysis shows that such statements are used as part of multi-unit and multi-modal turns where rule talk is accomplished through both grammatical and embodied means. Impersonal deontic statements serve multiple interactional goals: they account for having changed another’s behavior in the moment and at the same time impart knowledge for the future. We refer to this complex action as an “instruction”. The results of this study advance our understanding of rules and rule-following in everyday life, and of how resources of language and the body are combined to enforce and formulate rules.
  • Zinn, C., Wittenburg, P., & Ringersma, J. (2010). An evolving eScience environment for research data in linguistics. In N. Calzolari, B. Maegaard, J. Mariani, J. Odjik, K. Choukri, S. Piperidis, M. Rosner, & D. Tapias (Eds.), Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10) (pp. 894-899). European Language Resources Association (ELRA).

    Abstract

    The amount of research data in the Humanities is increasing at fast speed. Metadata helps describe this data and make it accessible to interested researchers within and across institutions. While metadata interoperability is an issue that is being recognised and addressed, the systematic and user-driven provision of annotations and the linking together of resources into new organisational layers have received much less attention. This paper gives an overview of our evolving technological eScience environment to support such functionality. It describes two tools, ADDIT and ViCoS, which enable researchers, rather than archive managers, to organise and reorganise research data to fit their particular needs. The two tools, which are embedded into our institute's existing software landscape, are an initial step towards an eScience environment that gives our scientists easy access to (multimodal) research data of their interest, and empowers them to structure, enrich, link together, and share such data as they wish.
  • Zoefel, B., Ten Oever, S., & Sack, A. T. (2018). The involvement of endogenous neural oscillations in the processing of rhythmic input: More than a regular repetition of evoked neural responses. Frontiers in Neuroscience, 12: 95. doi:10.3389/fnins.2018.00095.

    Abstract

    It is undisputed that presenting a rhythmic stimulus leads to a measurable brain response that follows the rhythmic structure of this stimulus. What is still debated, however, is the question whether this brain response exclusively reflects a regular repetition of evoked responses, or whether it also includes entrained oscillatory activity. Here we systematically present evidence in favor of an involvement of entrained neural oscillations in the processing of rhythmic input while critically pointing out which questions still need to be addressed before this evidence could be considered conclusive. In this context, we also explicitly discuss the potential functional role of such entrained oscillations, suggesting that these stimulus-aligned oscillations reflect, and serve as, predictive processes, an idea often only implicitly assumed in the literature.
  • Zora, H., Riad, T., Ylinen, S., & Csépe, V. (2021). Phonological variations are compensated at the lexical level: Evidence from auditory neural activity. Frontiers in Human Neuroscience, 15: 622904. doi:10.3389/fnhum.2021.622904.

    Abstract

    Dealing with phonological variations is important for speech processing. This article addresses whether phonological variations introduced by assimilatory processes are compensated for at the pre-lexical or lexical level, and whether the nature of variation and the phonological context influence this process. To this end, Swedish nasal regressive place assimilation was investigated using the mismatch negativity (MMN) component. In nasal regressive assimilation, the coronal nasal assimilates to the place of articulation of a following segment, most clearly with a velar or labial place of articulation, as in utan mej “without me” > [ʉːtam mɛjː]. In a passive auditory oddball paradigm, 15 Swedish speakers were presented with Swedish phrases with attested and unattested phonological variations and contexts for nasal assimilation. Attested variations – a coronal-to-labial change as in utan “without” > [ʉːtam] – were contrasted with unattested variations – a labial-to-coronal change as in utom “except” > ∗[ʉːtɔn] – in appropriate and inappropriate contexts created by mej “me” [mɛjː] and dej “you” [dɛjː]. Given that the MMN amplitude depends on the degree of variation between two stimuli, the MMN responses were expected to indicate to what extent the distance between variants was tolerated by the perceptual system. Since the MMN response reflects not only low-level acoustic processing but also higher-level linguistic processes, the results were predicted to indicate whether listeners process assimilation at the pre-lexical and lexical levels. The results indicated no significant interactions across variations, suggesting that variations in phonological forms do not incur any cost in lexical retrieval; hence such variation is compensated for at the lexical level. However, since the MMN response reached significance only for a labial-to-coronal change in a labial context and for a coronal-to-labial change in a coronal context, the compensation might have been influenced by the nature of variation and the phonological context. It is therefore concluded that while assimilation is compensated for at the lexical level, there is also some influence from pre-lexical processing. The present results reveal not only signal-based perception of phonological units, but also higher-level lexical processing, and are thus able to reconcile the bottom-up and top-down models of speech processing.
  • Zora, H., Riad, T., & Ylinen, S. (2019). Prosodically controlled derivations in the mental lexicon. Journal of Neurolinguistics, 52: 100856. doi:10.1016/j.jneuroling.2019.100856.

    Abstract

    Swedish morphemes are classified as prosodically specified or prosodically unspecified, depending on lexical or phonological stress, respectively. Here, we investigate the allomorphy of the suffix -(i)sk, which indicates the distinction between lexical and phonological stress; if attached to a lexically stressed morpheme, it takes a non-syllabic form (-sk), whereas if attached to a phonologically stressed morpheme, an epenthetic vowel is inserted (-isk). Using mismatch negativity (MMN), we explored the neural processing of this allomorphy across lexically stressed and phonologically stressed morphemes. In an oddball paradigm, participants were occasionally presented with congruent and incongruent derivations, created by the suffix -(i)sk, within the repetitive presentation of their monomorphemic stems. The results indicated that the congruent derivation of the lexically stressed stem elicited a larger MMN than the incongruent sequences of the same stem and the derivational suffix, whereas after the phonologically stressed stem a non-significant tendency towards an opposite pattern was observed. We argue that the significant MMN response to the congruent derivation in the lexical stress condition is in line with lexical MMN, indicating a holistic processing of the sequence of lexically stressed stem and derivational suffix. The enhanced MMN response to the incongruent derivation in the phonological stress condition, on the other hand, is suggested to reflect combinatorial processing of the sequence of phonologically stressed stem and derivational suffix. These findings bring a new aspect to the dual-system approach to neural processing of morphologically complex words, namely the specification of word stress.
  • Zora, H., & Csépe, V. (2021). Perception of Prosodic Modulations of Linguistic and Paralinguistic Origin: Evidence From Early Auditory Event-Related Potentials. Frontiers in Neuroscience, 15: 797487. doi:10.3389/fnins.2021.797487.

    Abstract

    How listeners handle prosodic cues of linguistic and paralinguistic origin is a central question for spoken communication. In the present EEG study, we addressed this question by examining neural responses to variations in pitch accent (linguistic) and affective (paralinguistic) prosody in Swedish words, using a passive auditory oddball paradigm. The results indicated that changes in pitch accent and affective prosody elicited mismatch negativity (MMN) responses at around 200 ms, confirming the brain’s pre-attentive response to any prosodic modulation. The MMN amplitude was, however, statistically larger to the deviation in affective prosody in comparison to the deviation in pitch accent and affective prosody combined, which is in line with previous research indicating not only a larger MMN response to affective prosody in comparison to neutral prosody but also a smaller MMN response to multidimensional deviants than unidimensional ones. The results, further, showed a significant P3a response to the affective prosody change in comparison to the pitch accent change at around 300 ms, in accordance with previous findings showing an enhanced positive response to emotional stimuli. The present findings provide evidence for distinct neural processing of different prosodic cues, and statistically confirm the intrinsic perceptual and motivational salience of paralinguistic information in spoken communication.
  • Zormpa, E., Meyer, A. S., & Brehm, L. (2019). Slow naming of pictures facilitates memory for their names. Psychonomic Bulletin & Review, 26(5), 1675-1682. doi:10.3758/s13423-019-01620-x.

    Abstract

    Speakers remember their own utterances better than those of their interlocutors, suggesting that language production is beneficial to memory. This may be partly explained by a generation effect: The act of generating a word is known to lead to a memory advantage (Slamecka & Graf, 1978). In earlier work, we showed a generation effect for recognition of images (Zormpa, Brehm, Hoedemaker, & Meyer, 2019). Here, we tested whether the recognition of their names would also benefit from name generation. Testing whether picture naming improves memory for words was our primary aim, as it serves to clarify whether the representations affected by generation are visual or conceptual/lexical. A secondary aim was to assess the influence of processing time on memory. Fifty-one participants named pictures in three conditions: after hearing the picture name (identity condition), backward speech, or an unrelated word. A day later, recognition memory was tested in a yes/no task. Memory in the backward speech and unrelated conditions, which required generation, was superior to memory in the identity condition, which did not require generation. The time taken by participants for naming was a good predictor of memory, such that words that took longer to be retrieved were remembered better. Importantly, that was the case only when generation was required: In the no-generation (identity) condition, processing time was not related to recognition memory performance. This work has shown that generation affects conceptual/lexical representations, making an important contribution to the understanding of the relationship between memory and language.
  • Zormpa, E., Brehm, L., Hoedemaker, R. S., & Meyer, A. S. (2019). The production effect and the generation effect improve memory in picture naming. Memory, 27(3), 340-352. doi:10.1080/09658211.2018.1510966.

    Abstract

    The production effect (better memory for words read aloud than words read silently) and the picture superiority effect (better memory for pictures than words) both improve item memory in a picture naming task (Fawcett, J. M., Quinlan, C. K., & Taylor, T. L. (2012). Interplay of the production and picture superiority effects: A signal detection analysis. Memory (Hove, England), 20(7), 655–666. doi:10.1080/09658211.2012.693510). Because picture naming requires coming up with an appropriate label, the generation effect (better memory for generated than read words) may contribute to the latter effect. In two forced-choice memory experiments, we tested the role of generation in a picture naming task on later recognition memory. In Experiment 1, participants named pictures silently or aloud with the correct name or an unreadable label superimposed. We observed a generation effect, a production effect, and an interaction between the two. In Experiment 2, unreliable labels were included to ensure full picture processing in all conditions. In this experiment, we observed a production and a generation effect but no interaction, implying the effects are dissociable. This research demonstrates the separable roles of generation and production in picture naming and their impact on memory. As such, it informs the link between memory and language production and has implications for memory asymmetries between language production and comprehension.

    Additional information

    pmem_a_1510966_sm9257.pdf
  • De Zubicaray, G. I., Hartsuiker, R. J., & Acheson, D. J. (2014). Mind what you say—general and specific mechanisms for monitoring in speech production. Frontiers in Human Neuroscience, 8: 514. doi:10.3389/fnhum.2014.00514.

    Abstract

    For most people, speech production is relatively effortless and error-free. Yet it has long been recognized that we need some type of control over what we are currently saying and what we plan to say. Precisely how we monitor our internal and external speech has been a topic of research interest for several decades. The predominant approach in psycholinguistics has assumed monitoring of both is accomplished via systems responsible for comprehending others' speech.

    This special topic aimed to broaden the field, firstly by examining proposals that speech production might also engage more general systems, such as those involved in action monitoring. A second aim was to examine proposals for a production-specific, internal monitor. Both aims require that we also specify the nature of the representations subject to monitoring.
  • Zuidema, W., & Fitz, H. (2019). Key issues and future directions: Models of human language and speech processing. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 353-358). Cambridge, MA: MIT Press.
  • Zumer, J. M., Scheeringa, R., Schoffelen, J.-M., Norris, D. G., & Jensen, O. (2014). Occipital alpha activity during stimulus processing gates the information flow to object-selective cortex. PLoS Biology, 12(10): e1001965. doi:10.1371/journal.pbio.1001965.

    Abstract

    Given the limited processing capabilities of the sensory system, it is essential that attended information is gated to downstream areas, whereas unattended information is blocked. While it has been proposed that alpha band (8–13 Hz) activity serves to route information to downstream regions by inhibiting neuronal processing in task-irrelevant regions, this hypothesis remains untested. Here we investigate how neuronal oscillations detected by electroencephalography in visual areas during working memory encoding serve to gate information reflected in the simultaneously recorded blood-oxygenation-level-dependent (BOLD) signals recorded by functional magnetic resonance imaging in downstream ventral regions. We used a paradigm in which 16 participants were presented with faces and landscapes in the right and left hemifields; one hemifield was attended and the other unattended. We observed that decreased alpha power contralateral to the attended object predicted the BOLD signal representing the attended object in ventral object-selective regions. Furthermore, increased alpha power ipsilateral to the attended object predicted a decrease in the BOLD signal representing the unattended object. We also found that the BOLD signal in the dorsal attention network inversely correlated with visual alpha power. This is the first demonstration, to our knowledge, that oscillations in the alpha band are implicated in the gating of information from the visual cortex to the ventral stream, as reflected in the representationally specific BOLD signal. This link of sensory alpha to downstream activity provides a neurophysiological substrate for the mechanism of selective attention during stimulus processing, which not only boosts the attended information but also suppresses distraction. Although previous studies have shown a relation between the BOLD signal from the dorsal attention network and the alpha band at rest, we demonstrate such a relation during a visuospatial task, indicating that the dorsal attention network exercises top-down control of visual alpha activity.
  • Zwitserlood, I., van den Bogaerde, B., & Terpstra, A. (2010). De Nederlandse Gebarentaal en het ERK. Levende Talen Magazine, 2010(5), 50-51.
  • Zwitserlood, I. (2010). De Nederlandse Gebarentaal, het Corpus NGT en het ERK. Levende Talen Magazine, 2010(8), 44-45.
  • Zwitserlood, I. (2003). Classifying hand configurations in Nederlandse Gebarentaal (Sign Language of the Netherlands). PhD Thesis, LOT, Utrecht. Retrieved from http://igitur-archive.library.uu.nl/dissertations/2003-0717-122837/UUindex.html.

    Abstract

    This study investigates the morphological and morphosyntactic characteristics of hand configurations in signs, particularly in Nederlandse Gebarentaal (NGT). The literature on sign languages in general acknowledges that hand configurations can function as morphemes, more specifically as classifiers, in a subset of signs: verbs expressing the motion, location, and existence of referents (VELMs). These verbs are considered the output of productive sign formation processes. In contrast, other signs in which similar hand configurations appear (iconic or motivated signs) have been considered to be lexicalized signs, not involving productive processes. This research report shows that meaningful hand configurations have (at least) two very different functions in the grammar of NGT (and presumably in other sign languages, too). First, they are agreement markers on VELMs, and hence are functional elements. Second, they are roots in motivated signs, and thus lexical elements. The latter signs are analysed as root compounds and are formed from various roots by productive processes. The similarities in surface form and differences in morphosyntactic characteristics observed in comparison of VELMs and root compounds are attributed to their different structures and to the sign language interface between grammar and phonetic form.
  • Zwitserlood, I. (2014). Meaning at the feature level in sign languages. The case of name signs in Sign Language of the Netherlands (NGT). In R. Kager (Ed.), Where the Principles Fail. A Festschrift for Wim Zonneveld on the occasion of his 64th birthday (pp. 241-251). Utrecht: Utrecht Institute of Linguistics OTS.
  • Zwitserlood, I. (2010). Laat je vingers spreken: NGT en vingerspelling. Levende Talen Magazine, 2010(2), 46-47.
  • Zwitserlood, I. (2010). Het Corpus NGT en de dagelijkse lespraktijk (2). Levende Talen Magazine, 2010(3), 47-48.
  • Zwitserlood, I. (2010). Sign language lexicography in the early 21st century and a recently published dictionary of Sign Language of the Netherlands. International Journal of Lexicography, 23, 443-476. doi:10.1093/ijl/ecq031.

    Abstract

    Sign language lexicography has thus far been a relatively obscure area in the world of lexicography. Therefore, this article provides background information on signed languages and the communities in which they are used, on the lexicography of sign languages, and on the situation in the Netherlands, as well as a review of a sign language dictionary that was recently published in the Netherlands.
  • Zwitserlood, I., & Crasborn, O. (2010). Wat kunnen we leren uit een Corpus Nederlandse Gebarentaal? WAP Nieuwsbrief, 28(2), 16-18.
  • Zwitserlood, I. (2003). Word formation below and above little x: Evidence from Sign Language of the Netherlands. In Proceedings of SCL 19. Nordlyd Tromsø University Working Papers on Language and Linguistics (pp. 488-502).

    Abstract

    Although in many respects sign languages have a similar structure to that of spoken languages, the different modalities in which both types of languages are expressed cause differences in structure as well. One of the most striking differences between spoken and sign languages is the influence of the interface between grammar and PF on the surface form of utterances. Spoken language words and phrases are in general characterized by sequential strings of sounds, morphemes and words, while in sign languages we find that many phonemes, morphemes, and even words are expressed simultaneously. A linguistic model should be able to account for the structures that occur in both spoken and sign languages. In this paper, I will discuss the morphological/morphosyntactic structure of signs in Nederlandse Gebarentaal (Sign Language of the Netherlands, henceforth NGT), with special focus on the components ‘place of articulation’ and ‘handshape’. I will focus on their multiple functions in the grammar of NGT and argue that the framework of Distributed Morphology (DM), which accounts for word formation in spoken languages, is also suited to account for the formation of structures in sign languages. First I will introduce the phonological and morphological structure of NGT signs. Then, I will briefly outline the major characteristics of the DM framework. Finally, I will account for signs that have the same surface form but have a different morphological structure by means of that framework.
  • Zwitserlood, I. (2010). Verlos ons van de glos. Levende Talen Magazine, 2010(7), 40-41.