Publications

  • Wittenburg, P. (2003). The DOBES model of language documentation. Language Documentation and Description, 1, 122-139.
  • Wnuk, E., Verkerk, A., Levinson, S. C., & Majid, A. (2022). Color technology is not necessary for rich and efficient color language. Cognition, 229: 105223. doi:10.1016/j.cognition.2022.105223.

    Abstract

    The evolution of basic color terms in language is claimed to be stimulated by technological development, involving technological control of color or exposure to artificially colored objects. Accordingly, technologically “simple” non-industrialized societies are expected to have poor lexicalization of color, i.e., only rudimentary lexica of 2, 3 or 4 basic color terms, with unnamed gaps in the color space. While it may indeed be the case that technology stimulates lexical growth of color terms, it is sometimes considered a sine qua non for color salience and lexicalization. We provide novel evidence that this overlooks the role of the natural environment, and people's engagement with the environment, in the evolution of color vocabulary. We introduce the Maniq—nomadic hunter-gatherers with no color technology, but who have a basic color lexicon of 6 or 7 terms, thus of the same order as large languages like Vietnamese and Hausa, and who routinely talk about color. We examine color language in Maniq and compare it to available data in other languages to demonstrate it has remarkably high consensual color term usage, on a par with English, and high coding efficiency. This shows colors can matter even for non-industrialized societies, suggesting technology is not necessary for color language. Instead, factors such as perceptual prominence of color in natural environments, its practical usefulness across communicative contexts, and symbolic importance can all stimulate elaboration of color language.
  • Wnuk, E., De Valk, J. M., Huisman, J. L. A., & Majid, A. (2017). Hot and cold smells: Odor-temperature associations across cultures. Frontiers in Psychology, 8: 1373. doi:10.3389/fpsyg.2017.01373.

    Abstract

    It is often assumed that odors are associated with hot and cold temperature, since odor processing may trigger thermal sensations, such as coolness in the case of mint. It is unknown, however, whether people make consistent temperature associations for a variety of everyday odors, and, if so, what determines them. Previous work investigating the bases of cross-modal associations suggests a number of possibilities, including universal forces (e.g., perception), as well as culture-specific forces (e.g., language and cultural beliefs). In this study, we examined odor-temperature associations in three cultures—Maniq (N = 11), Thai (N = 24), and Dutch (N = 24)—who differ with respect to their cultural preoccupation with odors, their odor lexicons, and their beliefs about the relationship of odors (and odor objects) to temperature. Participants matched 15 odors to temperature by touching cups filled with hot or cold water, and described the odors in their native language. The results showed no consistent associations among the Maniq, and only a handful of consistent associations between odor and temperature among the Thai and Dutch. The consistent associations differed across the two groups, arguing against their universality. Further analysis revealed cross-modal associations could not be explained by language, but could be the result of cultural beliefs.
  • Woensdregt, M., Jara-Ettinger, J., & Rubio-Fernandez, P. (2022). Language universals rely on social cognition: Computational models of the use of this and that to redirect the receiver’s attention. In J. Culbertson, A. Perfors, H. Rabagliati, & V. Ramenzoni (Eds.), Proceedings of the 44th Annual Conference of the Cognitive Science Society (CogSci 2022) (pp. 1382-1388). Toronto, Canada: Cognitive Science Society.

    Abstract

    Demonstratives—simple referential devices like this and that—are linguistic universals, but their meaning varies cross-linguistically. In languages like English and Italian, demonstratives are thought to encode the referent’s distance from the producer (e.g., that one means “the one far away from me”), while in others, like Portuguese and Spanish, they encode relative distance from both producer and receiver (e.g., aquel means “the one far away from both of us”). Here we propose that demonstratives are also sensitive to the receiver’s focus of attention, hence requiring a deeper form of social cognition than previously thought. We provide initial empirical and computational evidence for this idea, suggesting that producers use demonstratives to redirect the receiver’s attention towards the intended referent, rather than only to indicate its physical distance.
  • Wolf, M. C. (2022). Spoken and written word processing: Effects of presentation modality and individual differences in experience to written language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Wolf, M. C., Smith, A. C., Meyer, A. S., & Rowland, C. F. (2019). Modality effects in vocabulary acquisition. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 1212-1218). Montreal, QC: Cognitive Science Society.

    Abstract

    It is unknown whether modality affects the efficiency with which humans learn novel word forms and their meanings, with previous studies reporting both written and auditory advantages. The current study implements controls whose absence in previous work likely explains such contradictory findings. In two novel word learning experiments, participants were trained and tested on pseudoword–novel object pairs, with controls on modality of test, modality of meaning, duration of exposure, and transparency of word form. In both experiments word forms were presented in either their written or spoken form, each paired with a pictorial meaning (novel object). Following a 20-minute filler task, participants were tested on their ability to identify the picture-word form pairs on which they were trained. A between-subjects design generated four participant groups per experiment: (1) written training, written test; (2) written training, spoken test; (3) spoken training, written test; (4) spoken training, spoken test. In Experiment 1 the written stimulus was presented for a time period equal to the duration of the spoken form. Results showed that when the duration of exposure was equal, participants displayed a written training benefit. Given that words can be read faster than the time taken for the spoken form to unfold, in Experiment 2 the written form was presented for 300 ms, sufficient time to read the word yet 65% shorter than the duration of the spoken form. No modality effect was observed under these conditions, when exposure to the word form was equivalent. These results demonstrate, at least for proficient readers, that when exposure to the word form is controlled across modalities, the efficiency with which word form-meaning associations are learnt does not differ. Our results therefore suggest that, although we typically begin as aural-only word learners, we ultimately converge on learning mechanisms that learn equally efficiently from both written and spoken materials.
  • Wolf, M. C., Muijselaar, M. M. L., Boonstra, A. M., & De Bree, E. H. (2019). The relationship between reading and listening comprehension: Shared and modality-specific components. Reading and Writing, 32(7), 1747-1767. doi:10.1007/s11145-018-9924-8.

    Abstract

    This study aimed to increase our understanding of the relationship between reading and listening comprehension. Both in comprehension theory and in educational practice, reading and listening comprehension are often seen as interchangeable, overlooking modality-specific aspects of each. Three questions were addressed. First, it was examined to what extent reading and listening comprehension comprise modality-specific, distinct skills or an overlapping, domain-general skill, in terms of the amount of variance in one comprehension type explained by the other. Second, general and modality-unique subskills of reading and listening comprehension were sought by assessing the contributions of the foundational skills word reading fluency, vocabulary, memory, attention, and inhibition to both comprehension types. Lastly, the practice of using either listening comprehension or vocabulary as a proxy of general comprehension was investigated. Reading and listening comprehension tasks with the same format were administered to 85 second- and third-grade children. Analyses revealed that reading comprehension explained 34% of the variance in listening comprehension, and listening comprehension 40% of the variance in reading comprehension. Vocabulary and word reading fluency were found to be shared contributors to both reading and listening comprehension. None of the other cognitive skills contributed significantly to reading or listening comprehension. These results indicate that only part of the comprehension process is indeed domain-general and not influenced by the modality in which the information is provided. Vocabulary in particular seems to play a large role in this domain-general part. The findings warrant a more prominent focus on modality-specific aspects of both reading and listening comprehension in research and education.
  • Won, S.-O., Hu, I., Kim, M.-Y., Bae, J.-M., Kim, Y.-M., & Byun, K.-S. (2009). Theory and practice of Sign Language interpretation. Pyeongtaek: Korea National College of Rehabilitation & Welfare.
  • Wong, M. M. K., Hoekstra, S. D., Vowles, J., Watson, L. M., Fuller, G., Németh, A. H., Cowley, S. A., Ansorge, O., Talbot, K., & Becker, E. B. E. (2018). Neurodegeneration in SCA14 is associated with increased PKCγ kinase activity, mislocalization and aggregation. Acta Neuropathologica Communications, 6: 99. doi:10.1186/s40478-018-0600-7.

    Abstract

    Spinocerebellar ataxia type 14 (SCA14) is a subtype of the autosomal dominant cerebellar ataxias that is characterized by slowly progressive cerebellar dysfunction and neurodegeneration. SCA14 is caused by mutations in the PRKCG gene, encoding protein kinase C gamma (PKCγ). Despite the identification of 40 distinct disease-causing mutations in PRKCG, the pathological mechanisms underlying SCA14 remain poorly understood. Here we report the molecular neuropathology of SCA14 in post-mortem cerebellum and in human patient-derived induced pluripotent stem cells (iPSCs) carrying two distinct SCA14 mutations in the C1 domain of PKCγ, H36R and H101Q. We show that endogenous expression of these mutations results in the cytoplasmic mislocalization and aggregation of PKCγ in both patient iPSCs and cerebellum. PKCγ aggregates were not efficiently targeted for degradation. Moreover, mutant PKCγ was found to be hyper-activated, resulting in increased substrate phosphorylation. Together, our findings demonstrate that a combination of both loss-of-function and gain-of-function mechanisms is likely to underlie the pathogenesis of SCA14 caused by mutations in the C1 domain of PKCγ. Importantly, SCA14 patient iPSCs were found to accurately recapitulate pathological features observed in post-mortem SCA14 cerebellum, underscoring their potential as relevant disease models and their promise as future drug discovery tools.

  • Wong, M. M. K., Watson, L. M., & Becker, E. B. E. (2017). Recent advances in modelling of cerebellar ataxia using induced pluripotent stem cells. Journal of Neurology & Neuromedicine, 2(7), 11-15. doi:10.29245/2572.942X/2017/7.1134.

    Abstract

    The cerebellar ataxias are a group of incurable brain disorders that are caused primarily by the progressive dysfunction and degeneration of cerebellar Purkinje cells. The lack of reliable disease models for the heterogeneous ataxias has hindered the understanding of the underlying pathogenic mechanisms as well as the development of effective therapies for these devastating diseases. Recent advances in the field of induced pluripotent stem cell (iPSC) technology offer new possibilities to better understand and potentially reverse disease pathology. Given the neurodevelopmental phenotypes observed in several types of ataxias, iPSC-based models have the potential to provide significant insights into disease progression, as well as opportunities for the development of early intervention therapies. To date, however, very few studies have successfully used iPSC-derived cells to model cerebellar ataxias. In this review, we focus on recent breakthroughs in generating human iPSC-derived Purkinje cells. We also highlight the future challenges that will need to be addressed in order to fully exploit these models for the modelling of the molecular mechanisms underlying cerebellar ataxias and the development of effective therapeutics.
  • Wood, N. (2009). Field recording for dummies. In A. Majid (Ed.), Field manual volume 12 (pp. V). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Xiao, M., Kong, X., Liu, J., & Ning, J. (2009). TMBF: Bloom filter algorithms of time-dependent multi bit-strings for incremental set. In Proceedings of the 2009 International Conference on Ultra Modern Telecommunications & Workshops.

    Abstract

    Sets are widely used as a basic data structure. However, when they are applied to large-scale data, the costs of storage, search, and transport become substantial. The Bloom filter uses a fixed-size bit string to represent the elements of a static set, which reduces storage space and makes the search cost a fixed constant. This time-space efficiency is achieved at the cost of a small probability of false positives in membership queries. However, for many applications the space savings and constant lookup time consistently outweigh this drawback. The dynamic Bloom filter (DBF) supports concise representation and approximate membership queries of dynamic rather than static sets. It has been shown that DBF not only possesses the advantages of the standard Bloom filter, but also performs better when dealing with dynamic sets. This paper proposes a time-dependent multiple bit-strings Bloom filter (TMBF), which builds on DBF and targets dynamic incremental sets. TMBF uses multiple bit-strings in time order to represent a dynamically growing set and uses backward searching to test whether an element is in the set. Based on system logs from a real P2P file sharing system, the evaluation shows a 20% reduction in search cost compared to DBF.
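    The fixed-size bit-string scheme that TMBF builds on can be illustrated with a standard Bloom filter. The sketch below is illustrative only, not the TMBF or DBF algorithms from the paper; the class name and hashing scheme are our own choices.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: a fixed-size bit array probed by k hash functions."""

    def __init__(self, size=1024, k=3):
        self.size = size
        self.k = k
        self.bits = [False] * size

    def _positions(self, item):
        # Derive k deterministic bit positions by salting the item with an index.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False means the item is definitely absent; True may be a false positive.
        return all(self.bits[pos] for pos in self._positions(item))
```

    Because lookups touch only k fixed positions, search cost is constant regardless of how many elements have been added, which is the property the abstract refers to; the false-positive rate grows as the bit array fills.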
  • Yager, J., & Burenhult, N. (2017). Jedek: a newly discovered Aslian variety of Malaysia. Linguistic Typology, 21(3), 493-545. doi:10.1515/lingty-2017-0012.

    Abstract

    Jedek is a previously unrecognized variety of the Northern Aslian subgroup of the Aslian branch of the Austroasiatic language family. It is spoken by c. 280 individuals in the resettlement area of Sungai Rual, near Jeli in Kelantan state, Peninsular Malaysia. The community originally consisted of several bands of foragers along the middle reaches of the Pergau river. Jedek’s distinct status first became known during a linguistic survey carried out in the DOBES project Tongues of the Semang (2005-2011). This paper describes the process leading up to its discovery and provides an overview of its typological characteristics.
  • Yang, J. (2022). Discovering the units in language cognition: From empirical evidence to a computational model. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Yang, J., Zhu, H., & Tian, X. (2018). Group-level multivariate analysis in EasyEEG toolbox: Examining the temporal dynamics using topographic responses. Frontiers in Neuroscience, 12: 468. doi:10.3389/fnins.2018.00468.

    Abstract

    Electroencephalography (EEG) provides high temporal resolution cognitive information from non-invasive recordings. However, one common practice, using a subset of sensors in ERP analysis, can hardly provide holistic and precise dynamic results. Selecting or grouping subsets of sensors may also be subject to selection bias and multiple comparisons, and is further complicated by individual differences in group-level analysis. More importantly, changes in neural generators and variations in response magnitude from the same neural sources are difficult to separate, which limits the capacity to test different aspects of cognitive hypotheses. We introduce EasyEEG, a toolbox that includes several multivariate analysis methods to directly test cognitive hypotheses based on topographic responses that include data from all sensors. These multivariate methods can investigate effects in the dimensions of response magnitude and topographic patterns separately using data in the sensor space, thereby enabling the assessment of neural response dynamics. The concise workflow and the modular design provide user-friendly and programmer-friendly features. Users of all levels can benefit from the open-source, free EasyEEG to obtain a straightforward solution for efficient processing of EEG data and a complete pipeline from raw data to final results for publication.
  • Yang, J., Van den Bosch, A., & Frank, S. L. (2022). Unsupervised text segmentation predicts eye fixations during reading. Frontiers in Artificial Intelligence, 5: 731615. doi:10.3389/frai.2022.731615.

    Abstract

    Words typically form the basis of psycholinguistic and computational linguistic studies about sentence processing. However, recent evidence shows the basic units during reading, i.e., the items in the mental lexicon, are not always words, but could also be sub-word and supra-word units. To recognize these units, human readers require a cognitive mechanism to learn and detect them. In this paper, we assume eye fixations during reading reveal the locations of the cognitive units, and that the cognitive units are analogous to the text units discovered by unsupervised segmentation models. We predict eye fixations by model-segmented units on both English and Dutch text. The results show the model-segmented units predict eye fixations better than word units. This finding suggests that the predictive performance of model-segmented units indicates their plausibility as cognitive units. The Less-is-Better (LiB) model, which finds the units that minimize both long-term and working memory load, offers advantages both in terms of prediction score and efficiency among alternative models. Our results also suggest that modeling the least-effort principle for the management of long-term and working memory can lead to inferring cognitive units. Overall, the study supports the theory that the mental lexicon stores not only words but also smaller and larger units, suggests that fixation locations during reading depend on these units, and shows that unsupervised segmentation models can discover these units.
  • Yoshihara, M., Nakayama, M., Verdonschot, R. G., & Hino, Y. (2017). The phonological unit of Japanese Kanji compounds: A masked priming investigation. Journal of Experimental Psychology: Human Perception and Performance, 43(7), 1303-1328. doi:10.1037/xhp0000374.

    Abstract

    Using the masked priming paradigm, we examined which phonological unit is used when naming Kanji compounds. Although the phonological unit in the Japanese language has been suggested to be the mora, Experiment 1 found no priming for mora-related Kanji prime-target pairs. In Experiment 2, significant priming was only found when Kanji pairs shared the whole sound of their initial Kanji characters. Nevertheless, when the same Kanji pairs used in Experiment 2 were transcribed into Kana, significant mora priming was observed in Experiment 3. In Experiment 4, matching the syllable structure and pitch-accent of the initial Kanji characters did not lead to mora priming, ruling out potential alternative explanations for the earlier absence of the effect. A significant mora priming effect was observed, however, when the shared initial mora constituted the whole sound of the initial Kanji characters in Experiment 5. Lastly, these results were replicated in Experiment 6. Overall, these results indicate that the phonological unit involved when naming Kanji compounds is not the mora but the whole sound of each Kanji character. We discuss how different phonological units may be involved when processing Kanji and Kana words as well as the implications for theories dealing with language production processes.
  • Zeller, J., Bylund, E., & Lewis, A. G. (2022). The parser consults the lexicon in spite of transparent gender marking: EEG evidence from noun class agreement processing in Zulu. Cognition, 226: 105148. doi:10.1016/j.cognition.2022.105148.

    Abstract

    In sentence comprehension, the parser in many languages has the option to use both the morphological form of a noun and its lexical representation when evaluating agreement. The additional step of consulting the lexicon incurs processing costs, and an important question is whether the parser takes that step even when the formal cues alone are sufficiently reliable to evaluate agreement. Our study addressed this question using electrophysiology in Zulu, a language where both grammatical gender and number features are reliably expressed formally by noun class prefixes, but only gender features are lexically specified. We observed reduced, more topographically focal LAN, and more frontally distributed alpha/beta power effects for gender compared to number agreement violations. These differences provide evidence that for gender mismatches, even though the formal cues are reliable, the parser nevertheless takes the additional step of consulting the noun's lexical representation, a step which is not available for number.

  • Zeshan, U. (2003). Aspects of Türk Işaret Dili (Turkish Sign Language). Sign Language and Linguistics, 6(1), 43-75. doi:10.1075/sll.6.1.04zes.

    Abstract

    This article provides a first overview of some striking grammatical structures in Türk İşaret Dili (Turkish Sign Language, TID), the sign language used by the Deaf community in Turkey. The data are described with a typological perspective in mind, focusing on aspects of TID grammar that are typologically unusual across sign languages. After giving an overview of the historical, sociolinguistic and educational background of TID and the language community using this sign language, five domains of TID grammar are investigated in detail. These include a movement derivation signalling completive aspect, three types of nonmanual negation — headshake, backward head tilt, and puffed cheeks — and their distribution, cliticization of the negator NOT to a preceding predicate host sign, an honorific whole-entity classifier used to refer to humans, and a question particle, its history and current status in the language. A final evaluation points out the significance of these data for sign language research and looks at perspectives for a deeper understanding of the language and its history.
  • Zhang, Q., Zhou, Y., & Lou, H. (2022). The dissociation between age of acquisition and word frequency effects in Chinese spoken picture naming. Psychological Research, 86, 1918-1929. doi:10.1007/s00426-021-01616-0.

    Abstract

    This study aimed to examine the locus of age of acquisition (AoA) and word frequency (WF) effects in Chinese spoken picture naming, using a picture–word interference task. We conducted four experiments manipulating the properties of picture names (AoA in Experiments 1 and 2, while controlling WF; and WF in Experiments 3 and 4, while controlling AoA), and the relations between distractors and targets (semantic or phonological relatedness). Both Experiments 1 and 2 demonstrated AoA effects in picture naming; pictures of early acquired concepts were named faster than those acquired later. There was an interaction between AoA and semantic relatedness, but not between AoA and phonological relatedness, suggesting localisation of AoA effects at the stage of lexical access in picture naming. Experiments 3 and 4 demonstrated WF effects: pictures of high-frequency concepts were named faster than those of low-frequency concepts. WF interacted with both phonological and semantic relatedness, suggesting localisation of WF effects at multiple levels of picture naming, including lexical access and phonological encoding. Our findings show that AoA and WF effects exist in Chinese spoken word production and may arise at related processes of lexical selection.
  • Zhang, Y., & Yu, C. (2022). Examining real-time attention dynamics in parent-infant picture book reading. In J. Culbertson, A. Perfors, H. Rabagliati, & V. Ramenzoni (Eds.), Proceedings of the 44th Annual Conference of the Cognitive Science Society (CogSci 2022) (pp. 1367-1374). Toronto, Canada: Cognitive Science Society.

    Abstract

    Picture book reading is a common word-learning context in which parents repeatedly name objects for their child, and it has been found to facilitate early word learning. To learn the correct word-object mappings in a book-reading context, infants need to be able to link what they see with what they hear. However, given multiple objects on every book page, it is not clear how infants direct their attention to objects named by parents. The aim of the current study is to examine how infants mechanistically discover the correct word-object mappings during book reading in real time. We used head-mounted eye-tracking during parent-infant picture book reading and measured the infant's moment-by-moment visual attention to the named referent. We also examined how gesture cues provided by both the child and the parent may influence infants' attention to the named target. We found that although parents provided many object labels during book reading, infants were not able to attend to the named objects easily. However, their ability to follow and use gestures to direct the other social partner’s attention increased the chance of looking at the named target during parent naming.
  • Zhang, Y., & Yu, C. (2017). How misleading cues influence referential uncertainty in statistical cross-situational learning. In M. LaMendola, & J. Scott (Eds.), Proceedings of the 41st Annual Boston University Conference on Language Development (BUCLD 41) (pp. 820-833). Boston, MA: Cascadilla Press.
  • Zhang, Y., Chen, C.-h., & Yu, C. (2019). Mechanisms of cross-situational learning: Behavioral and computational evidence. In Advances in Child Development and Behavior (Vol. 56, pp. 37-63).

    Abstract

    Word learning happens in everyday contexts with many words and many potential referents for those words in view at the same time. It is challenging for young learners to find the correct referent upon hearing an unknown word in the moment. This problem of referential uncertainty has been deemed the crux of early word learning (Quine, 1960). Recent empirical and computational studies have found support for a statistical solution to the problem termed cross-situational learning. Cross-situational learning allows learners to acquire word meanings across multiple exposures, even though each individual exposure is referentially uncertain. Recent empirical research shows that infants, children and adults rely on cross-situational learning to learn new words (Smith & Yu, 2008; Suanda, Mugwanya, & Namy, 2014; Yu & Smith, 2007). However, researchers have found evidence supporting two very different theoretical accounts of learning mechanisms: Hypothesis Testing (Gleitman, Cassidy, Nappa, Papafragou, & Trueswell, 2005; Markman, 1992) and Associative Learning (Frank, Goodman, & Tenenbaum, 2009; Yu & Smith, 2007). Hypothesis Testing is generally characterized as a form of learning in which a coherent hypothesis regarding a specific word-object mapping is formed, often in conceptually constrained ways. The hypothesis is then either accepted or rejected in the light of additional evidence. Proponents of the Associative Learning framework, by contrast, characterize learning as aggregating information over time through implicit associative mechanisms. A learner acquires the meaning of a word when the association between the word and the referent becomes relatively strong. In this chapter, we consider these two psychological theories in the context of cross-situational word-referent learning. By reviewing recent empirical and cognitive modeling studies, our goal is to deepen our understanding of the underlying word learning mechanisms by examining and comparing the two theoretical accounts.
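    The associative account described above can be illustrated with a minimal co-occurrence counter. This is a toy sketch of cross-situational associative learning, not any of the cited authors' models; the function name and the example data are invented for illustration.

```python
from collections import defaultdict

def cross_situational_learn(exposures):
    """Accumulate word-object co-occurrence counts across exposures.

    Each exposure is a (words, objects) pair; every word-object pairing
    present in the same exposure has its association strengthened by 1.
    """
    assoc = defaultdict(lambda: defaultdict(int))
    for words, objects in exposures:
        for word in words:
            for obj in objects:
                assoc[word][obj] += 1
    # A word's learned referent is the object with the strongest association.
    return {word: max(objs, key=objs.get) for word, objs in assoc.items()}
```

    Although any single exposure is ambiguous (each word co-occurs with several objects), the correct mapping accumulates the highest count across exposures, which is the core intuition behind the statistical solution to referential uncertainty.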
  • Wu, S., Zhang, D., Li, X., Zhao, J., Sun, X., Shi, L., Mao, Y., Zhang, Y., & Jiang, F. (2022). Siblings and Early Childhood Development: Evidence from a Population-Based Cohort in Preschoolers from Shanghai. International Journal of Environmental Research and Public Health, 19(9): 5739. doi:10.3390/ijerph19095739.

    Abstract

    (1) Background: The current study aims to investigate the association between the presence of a sibling and early childhood development (ECD). (2) Methods: Data were obtained from a large-scale population-based cohort in Shanghai. Children were followed from three to six years old. Based on birth order, the sample was divided into four groups: single child, younger child, elder child, and single-elder transfer (transfer from single-child to elder-child). Psychosocial well-being and school readiness were assessed with the total difficulties score from the Strengths and Difficulties Questionnaire (SDQ) and the overall development score from the early Human Capability Index (eHCI), respectively. A multilevel model was conducted to evaluate the main effect of each sibling group and the group × age interaction effect on psychosocial well-being and school readiness. (3) Results: Across all measures, children in the younger child group presented with fewer psychosocial problems (β = −0.96, 95% CI: −1.44, −0.48, p < 0.001) and higher school readiness scores (β = 1.56, 95% CI: 0.61, 2.51, p = 0.001). No significant difference, or marginally significant difference, was found between the elder group and the single-child group. Compared to the single-child group, the single-elder transfer group presented with slower development on both psychosocial well-being (Age × Group: β = 0.37, 95% CI: 0.18, 0.56, p < 0.001) and school readiness (Age × Group: β = −0.75, 95% CI: −1.10, −0.40, p < 0.001). The sibling-ECD effects did not differ between children from families of low versus high socioeconomic status. (4) Conclusion: The current study suggested the presence of a sibling was not associated with worse development outcomes in general. Rather, children with an elder sibling are more likely to present with better ECD.
  • Zhao, J., Yu, Z., Sun, X., Wu, S., Zhang, J., Zhang, D., Zhang, Y., & Jiang, F. (2022). Association between screen time trajectory and early childhood development in children in China. JAMA Pediatrics, 176(8), 768-775. doi:10.1001/jamapediatrics.2022.1630.

    Abstract

    Importance: Screen time has become an integral part of children's daily lives. Nevertheless, the developmental consequences of screen exposure in young children remain unclear.

    Objective: To investigate the screen time trajectory from 6 to 72 months of age and its association with children's development at age 72 months in a prospective birth cohort.

    Design, setting, and participants: Women in Shanghai, China, who were at 34 to 36 gestational weeks and had an expected delivery date between May 2012 and July 2013 were recruited for this cohort study. Their children were followed up at 6, 9, 12, 18, 24, 36, 48, and 72 months of age. Children's screen time was classified into 3 groups at age 6 months: continued low (ie, stable amount of screen time), late increasing (ie, sharp increase in screen time at age 36 months), and early increasing (ie, large amount of screen time in early stages that remained stable after age 36 months). Cognitive development was assessed by specially trained research staff in a research clinic. Of 262 eligible mother-offspring pairs, 152 dyads had complete data regarding all variables of interest and were included in the analyses. Data were analyzed from September 2019 to November 2021.

    Exposures: Mothers reported screen times of children at 6, 9, 12, 18, 24, 36, 48, and 72 months of age.

    Main outcomes and measures: The cognitive development of children was evaluated using the Wechsler Intelligence Scale for Children, 4th edition, at age 72 months. Social-emotional development was measured by the Strengths and Difficulties Questionnaire, which was completed by the child's mother. The study described demographic characteristics, maternal mental health, child's temperament at age 6 months, and mental development at age 12 months by subgroups clustered by a group-based trajectory model. Group difference was examined by analysis of variance.

    Results: A total of 152 mother-offspring dyads were included in this study, including 77 girls (50.7%) and 75 boys (49.3%) (mean [SD] age of the mothers was 29.7 [3.3] years). Children's screen time trajectory from age 6 to 72 months was classified into 3 groups: continued low (110 [72.4%]), late increasing (17 [11.2%]), and early increasing (25 [16.4%]). Compared with the continued low group, the late increasing group had lower scores on the Full-Scale Intelligence Quotient (β coefficient, -8.23; 95% CI, -15.16 to -1.30; P < .05) and the General Ability Index (β coefficient, -6.42; 95% CI, -13.70 to 0.86; P = .08); the early increasing group presented with lower scores on the Full-Scale Intelligence Quotient (β coefficient, -6.68; 95% CI, -12.35 to -1.02; P < .05) and the Cognitive Proficiency Index (β coefficient, -10.56; 95% CI, -17.23 to -3.90; P < .01) and a higher total difficulties score (β coefficient, 2.62; 95% CI, 0.49-4.76; P < .05).

    Conclusions and relevance: This cohort study found that excessive screen time in early years was associated with poor cognitive and social-emotional development. This finding may be helpful in encouraging awareness among parents of the importance of onset and duration of children's screen time.
  • Zhen, Z., Kong, X., Huang, L., Yang, Z., Wang, X., Hao, X., Huang, T., Song, Y., & Liu, J. (2017). Quantifying the variability of scene-selective regions: Interindividual, interhemispheric, and sex differences. Human Brain Mapping, 38(4), 2260-2275. doi:10.1002/hbm.23519.

    Abstract

    Scene-selective regions (SSRs), including the parahippocampal place area (PPA), retrosplenial cortex (RSC), and transverse occipital sulcus (TOS), are among the most widely characterized functional regions in the human brain. However, previous studies have mostly focused on the commonality within each SSR, providing little information on different aspects of their variability. In a large group of healthy adults (N = 202), we used functional magnetic resonance imaging to investigate different aspects of topographical and functional variability within SSRs, including interindividual, interhemispheric, and sex differences. First, the PPA, RSC, and TOS were delineated manually for each individual. We then demonstrated that SSRs showed substantial interindividual variability in both spatial topography and functional selectivity. We further identified consistent interhemispheric differences in the spatial topography of all three SSRs, but distinct interhemispheric differences in scene selectivity. Moreover, we found that all three SSRs showed stronger scene selectivity in men than in women. In summary, our work thoroughly characterized the interindividual, interhemispheric, and sex variability of the SSRs and invites future work on the origin and functional significance of these variabilities. Additionally, we constructed the first probabilistic atlases for the SSRs, which provide the detailed anatomical reference for further investigations of the scene network.
  • Zheng, X., Roelofs, A., Farquhar, J., & Lemhöfer, K. (2018). Monitoring of language selection errors in switching: Not all about conflict. PLoS One, 13(11): e0200397. doi:10.1371/journal.pone.0200397.

    Abstract

    Although bilingual speakers are very good at selectively using one language rather than another, sometimes language selection errors occur. To investigate how bilinguals monitor their speech errors and control their languages in use, we recorded event-related potentials (ERPs) in unbalanced Dutch-English bilingual speakers in a cued language-switching task. We tested the conflict-based monitoring model of Nozari and colleagues by investigating the error-related negativity (ERN) and comparing the effects of the two switching directions (i.e., to the first language, L1 vs. to the second language, L2). Results show that the speakers made more language selection errors when switching from their L2 to the L1 than vice versa. In the EEG, we observed a robust ERN effect following language selection errors compared to correct responses, reflecting monitoring of speech errors. Most interestingly, the ERN effect was enlarged when the speakers were switching to their L2 (less conflict) compared to switching to the L1 (more conflict). Our findings do not support the conflict-based monitoring model. We discuss an alternative account in terms of error prediction and reinforcement learning.
  • Zheng, X., Roelofs, A., & Lemhöfer, K. (2018). Language selection errors in switching: language priming or cognitive control? Language, Cognition and Neuroscience, 33(2), 139-147. doi:10.1080/23273798.2017.1363401.

    Abstract

    Although bilingual speakers are very good at selectively using one language rather than another, sometimes language selection errors occur. We examined the relative contribution of top-down cognitive control and bottom-up language priming to these errors. Unbalanced Dutch-English bilinguals named pictures and were cued to switch between languages under time pressure. We also manipulated the number of same-language trials before a switch (long vs. short runs). Results show that speakers made more language selection errors when switching from their second language (L2) to the first language (L1) than vice versa. Furthermore, they made more errors when switching to the L1 after a short compared to a long run of L2 trials. In the reverse switching direction (L1 to L2), run length had no effect. These findings are most compatible with an account of language selection errors that assigns a strong role to top-down processes of cognitive control.

    Additional information

    plcp_a_1363401_sm2537.docx
  • Zheng, X., & Lemhöfer, K. (2019). The “semantic P600” in second language processing: When syntax conflicts with semantics. Neuropsychologia, 127, 131-147. doi:10.1016/j.neuropsychologia.2019.02.010.

    Abstract

    In sentences like “the mouse that chased the cat was hungry”, the syntactically correct interpretation (the mouse chases the cat) is contradicted by semantic and pragmatic knowledge. Previous research has shown that L1 speakers sometimes base sentence interpretation on this type of knowledge (so-called “shallow” or “good-enough” processing). We made use of both behavioural and ERP measurements to investigate whether L2 learners differ from native speakers in the extent to which they engage in “shallow” syntactic processing. German learners of Dutch as well as Dutch native speakers read sentences containing relative clauses (as in the example above) for which the plausible thematic roles were or were not reversed, and made plausibility judgments. The results show that behaviourally, L2 learners had more difficulties than native speakers to discriminate plausible from implausible sentences. In the ERPs, we replicated the previously reported finding of a “semantic P600” for semantic reversal anomalies in native speakers, probably reflecting the effort to resolve the syntax-semantics conflict. In L2 learners, though, this P600 was largely attenuated and surfaced only in those trials that were judged correctly for plausibility. These results generally point at a more prevalent, but not exclusive occurrence of shallow syntactic processing in L2 learners.
  • Zhu, Z., Bastiaansen, M. C. M., Hakun, J. G., Petersson, K. M., Wang, S., & Hagoort, P. (2019). Semantic unification modulates N400 and BOLD signal change in the brain: A simultaneous EEG-fMRI study. Journal of Neurolinguistics, 52: 100855. doi:10.1016/j.jneuroling.2019.100855.

    Abstract

    Semantic unification during sentence comprehension has been associated with amplitude change of the N400 in event-related potential (ERP) studies, and activation in the left inferior frontal gyrus (IFG) in functional magnetic resonance imaging (fMRI) studies. However, the specificity of this activation to semantic unification remains unknown. To more closely examine the brain processes involved in semantic unification, we employed simultaneous EEG-fMRI to time-lock the semantic unification related N400 change, and integrated trial-by-trial variation in both N400 and BOLD change beyond the condition-level BOLD change difference measured in traditional fMRI analyses. Participants read sentences in which semantic unification load was parametrically manipulated by varying cloze probability. Separately, ERP and fMRI results replicated previous findings, in that semantic unification load parametrically modulated the amplitude of the N400 and cortical activation. Integrated EEG-fMRI analyses revealed a different pattern in which functional activity in the left IFG and bilateral supramarginal gyrus (SMG) was associated with N400 amplitude, with the left IFG activation and bilateral SMG activation being selective to the condition-level and trial-level of semantic unification load, respectively. By employing the integrated EEG-fMRI analyses, this study is among the first to shed light on how to integrate trial-level variation in language comprehension.
  • Zimianiti, E. (2022). Is semantic memory the winning component in second language teaching with Accelerative Integrated Method (AIM)? LingUU Journal, 6(1), 54-62.

    Abstract

    This paper constitutes a research proposal based on Rousse-Malpalt's (2019) dissertation, which extensively examines the effectiveness of the Accelerative Integrated Method (AIM) in second language (L2) learning. Although AIM has been found to be highly effective in comparison with non-implicit teaching methods, the reasons behind its success and effectiveness are as yet unknown. As Semantic Memory (SM) is the component of memory responsible for the conceptualization and storage of knowledge, this paper sets out to propose an investigation of its role in the learning process of AIM and to provide insights as to why the embodied experience of learning with AIM is more effective than that of other methods. The tasks proposed for administration take into account the relation of gestures to a learner's memorization process and Semantic Memory. Lastly, this paper presents a future research idea about the learning mechanisms of sign languages in people with hearing deficits and in a healthy population, aiming to indicate which brain mechanisms benefit from the teaching method of AIM and to reveal important brain functions for SLA via AIM.
  • Zoefel, B., Ten Oever, S., & Sack, A. T. (2018). The involvement of endogenous neural oscillations in the processing of rhythmic input: More than a regular repetition of evoked neural responses. Frontiers in Neuroscience, 12: 95. doi:10.3389/fnins.2018.00095.

    Abstract

    It is undisputed that presenting a rhythmic stimulus leads to a measurable brain response that follows the rhythmic structure of this stimulus. What is still debated, however, is the question whether this brain response exclusively reflects a regular repetition of evoked responses, or whether it also includes entrained oscillatory activity. Here we systematically present evidence in favor of an involvement of entrained neural oscillations in the processing of rhythmic input while critically pointing out which questions still need to be addressed before this evidence could be considered conclusive. In this context, we also explicitly discuss the potential functional role of such entrained oscillations, suggesting that these stimulus-aligned oscillations reflect, and serve as, predictive processes, an idea often only implicitly assumed in the literature.
  • Zora, H., Riad, T., & Ylinen, S. (2019). Prosodically controlled derivations in the mental lexicon. Journal of Neurolinguistics, 52: 100856. doi:10.1016/j.jneuroling.2019.100856.

    Abstract

    Swedish morphemes are classified as prosodically specified or prosodically unspecified, depending on lexical or phonological stress, respectively. Here, we investigate the allomorphy of the suffix -(i)sk, which indicates the distinction between lexical and phonological stress; if attached to a lexically stressed morpheme, it takes a non-syllabic form (-sk), whereas if attached to a phonologically stressed morpheme, an epenthetic vowel is inserted (-isk). Using mismatch negativity (MMN), we explored the neural processing of this allomorphy across lexically stressed and phonologically stressed morphemes. In an oddball paradigm, participants were occasionally presented with congruent and incongruent derivations, created by the suffix -(i)sk, within the repetitive presentation of their monomorphemic stems. The results indicated that the congruent derivation of the lexically stressed stem elicited a larger MMN than the incongruent sequences of the same stem and the derivational suffix, whereas after the phonologically stressed stem a non-significant tendency towards an opposite pattern was observed. We argue that the significant MMN response to the congruent derivation in the lexical stress condition is in line with lexical MMN, indicating a holistic processing of the sequence of lexically stressed stem and derivational suffix. The enhanced MMN response to the incongruent derivation in the phonological stress condition, on the other hand, is suggested to reflect combinatorial processing of the sequence of phonologically stressed stem and derivational suffix. These findings bring a new aspect to the dual-system approach to neural processing of morphologically complex words, namely the specification of word stress.
  • Zora, H., Gussenhoven, C., Tremblay, A., & Liu, F. (2022). Editorial: Crosstalk between intonation and lexical tones: Linguistic, cognitive and neuroscience perspectives. Frontiers in Psychology, 13: 1101499. doi:10.3389/fpsyg.2022.1101499.

    Abstract

    The interplay between categorical and continuous aspects of the speech signal remains central and yet controversial in the fields of phonetics and phonology. The division between phonological abstractions and phonetic variations has been particularly relevant to the unraveling of diverse communicative functions of pitch in the domain of prosody. Pitch influences vocal communication in two major but fundamentally different ways, and lexical and intonational tones exquisitely capture these functions. Lexical tone contrasts convey lexical meanings as well as derivational meanings at the word level and are grammatically encoded as discrete structures. Intonational tones, on the other hand, signal post-lexical meanings at the phrasal level and typically allow gradient pragmatic variations. Since categorical and gradient uses of pitch are ubiquitous and closely intertwined in their physiological and psychological processes, further research is warranted for a more detailed understanding of their structural and functional characterisations. This Research Topic addresses this matter from a wide range of perspectives, including first and second language acquisition, speech production and perception, and structural and functional diversity, and working with distinct languages and experimental measures. In the following, we provide a short overview of the contributions submitted to this topic.

    Additional information

    also published as book chapter (2023)
  • Zormpa, E., Meyer, A. S., & Brehm, L. (2019). Slow naming of pictures facilitates memory for their names. Psychonomic Bulletin & Review, 26(5), 1675-1682. doi:10.3758/s13423-019-01620-x.

    Abstract

    Speakers remember their own utterances better than those of their interlocutors, suggesting that language production is beneficial to memory. This may be partly explained by a generation effect: The act of generating a word is known to lead to a memory advantage (Slamecka & Graf, 1978). In earlier work, we showed a generation effect for recognition of images (Zormpa, Brehm, Hoedemaker, & Meyer, 2019). Here, we tested whether the recognition of their names would also benefit from name generation. Testing whether picture naming improves memory for words was our primary aim, as it serves to clarify whether the representations affected by generation are visual or conceptual/lexical. A secondary aim was to assess the influence of processing time on memory. Fifty-one participants named pictures in three conditions: after hearing the picture name (identity condition), backward speech, or an unrelated word. A day later, recognition memory was tested in a yes/no task. Memory in the backward speech and unrelated conditions, which required generation, was superior to memory in the identity condition, which did not require generation. The time taken by participants for naming was a good predictor of memory, such that words that took longer to be retrieved were remembered better. Importantly, that was the case only when generation was required: In the no-generation (identity) condition, processing time was not related to recognition memory performance. This work has shown that generation affects conceptual/lexical representations, making an important contribution to the understanding of the relationship between memory and language.
  • Zormpa, E., Brehm, L., Hoedemaker, R. S., & Meyer, A. S. (2019). The production effect and the generation effect improve memory in picture naming. Memory, 27(3), 340-352. doi:10.1080/09658211.2018.1510966.

    Abstract

    The production effect (better memory for words read aloud than words read silently) and the picture superiority effect (better memory for pictures than words) both improve item memory in a picture naming task (Fawcett, J. M., Quinlan, C. K., & Taylor, T. L. (2012). Interplay of the production and picture superiority effects: A signal detection analysis. Memory (Hove, England), 20(7), 655–666. doi:10.1080/09658211.2012.693510). Because picture naming requires coming up with an appropriate label, the generation effect (better memory for generated than read words) may contribute to the latter effect. In two forced-choice memory experiments, we tested the role of generation in a picture naming task on later recognition memory. In Experiment 1, participants named pictures silently or aloud with the correct name or an unreadable label superimposed. We observed a generation effect, a production effect, and an interaction between the two. In Experiment 2, unreliable labels were included to ensure full picture processing in all conditions. In this experiment, we observed a production and a generation effect but no interaction, implying the effects are dissociable. This research demonstrates the separable roles of generation and production in picture naming and their impact on memory. As such, it informs the link between memory and language production and has implications for memory asymmetries between language production and comprehension.

    Additional information

    pmem_a_1510966_sm9257.pdf
  • De Zubicaray, G., & Fisher, S. E. (Eds.). (2017). Genes, brain and language [Special Issue]. Brain and Language, 172.
  • De Zubicaray, G., & Fisher, S. E. (2017). Genes, Brain, and Language: A brief introduction to the Special Issue. Brain and Language, 172, 1-2. doi:10.1016/j.bandl.2017.08.003.
  • Zuidema, W., & Fitz, H. (2019). Key issues and future directions: Models of human language and speech processing. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 353-358). Cambridge, MA: MIT Press.
  • Zwitserlood, I. (2003). Classifying hand configurations in Nederlandse Gebarentaal (Sign Language of the Netherlands). PhD Thesis, LOT, Utrecht. Retrieved from http://igitur-archive.library.uu.nl/dissertations/2003-0717-122837/UUindex.html.

    Abstract

    This study investigates the morphological and morphosyntactic characteristics of hand configurations in signs, particularly in Nederlandse Gebarentaal (NGT). The literature on sign languages in general acknowledges that hand configurations can function as morphemes, more specifically as classifiers, in a subset of signs: verbs expressing the motion, location, and existence of referents (VELMs). These verbs are considered the output of productive sign formation processes. In contrast, other signs in which similar hand configurations appear (iconic or motivated signs) have been considered to be lexicalized signs, not involving productive processes. This research report shows that meaningful hand configurations have (at least) two very different functions in the grammar of NGT (and presumably in other sign languages, too). First, they are agreement markers on VELMs, and hence are functional elements. Second, they are roots in motivated signs, and thus lexical elements. The latter signs are analysed as root compounds and are formed from various roots by productive processes. The similarities in surface form and differences in morphosyntactic characteristics observed in comparison of VELMs and root compounds are attributed to their different structures and to the sign language interface between grammar and phonetic form.
  • Zwitserlood, I. (2009). Het Corpus NGT. Levende Talen Magazine, 6, 44-45.

    Abstract

    The Corpus NGT
  • Zwitserlood, I. (2009). Het Corpus NGT en de dagelijkse lespraktijk (1). Levende Talen Magazine, 8, 40-41.
  • Zwitserlood, I. (2003). Word formation below and above little x: Evidence from Sign Language of the Netherlands. In Proceedings of SCL 19. Nordlyd Tromsø University Working Papers on Language and Linguistics (pp. 488-502).

    Abstract

    Although in many respects sign languages have a similar structure to that of spoken languages, the different modalities in which both types of languages are expressed cause differences in structure as well. One of the most striking differences between spoken and sign languages is the influence of the interface between grammar and PF on the surface form of utterances. Spoken language words and phrases are in general characterized by sequential strings of sounds, morphemes and words, while in sign languages we find that many phonemes, morphemes, and even words are expressed simultaneously. A linguistic model should be able to account for the structures that occur in both spoken and sign languages. In this paper, I will discuss the morphological/morphosyntactic structure of signs in Nederlandse Gebarentaal (Sign Language of the Netherlands, henceforth NGT), with special focus on the components 'place of articulation' and 'handshape'. I will focus on their multiple functions in the grammar of NGT and argue that the framework of Distributed Morphology (DM), which accounts for word formation in spoken languages, is also suited to account for the formation of structures in sign languages. First I will introduce the phonological and morphological structure of NGT signs. Then, I will briefly outline the major characteristics of the DM framework. Finally, I will account for signs that have the same surface form but have a different morphological structure by means of that framework.