Publications

  • Willems, R. M., Hagoort, P., & Casasanto, D. (2010). Body-specific representations of action verbs: Neural evidence from right- and left-handers. Psychological Science, 21, 67-74. doi:10.1177/0956797609354072.

    Abstract

    According to theories of embodied cognition, understanding a verb like throw involves unconsciously simulating the action of throwing, using areas of the brain that support motor planning. If understanding action words involves mentally simulating one’s own actions, then the neurocognitive representation of word meanings should differ for people with different kinds of bodies, who perform actions in systematically different ways. In a test of the body-specificity hypothesis, we used functional magnetic resonance imaging to compare premotor activity correlated with action verb understanding in right- and left-handers. Right-handers preferentially activated the left premotor cortex during lexical decisions on manual-action verbs (compared with nonmanual-action verbs), whereas left-handers preferentially activated right premotor areas. This finding helps refine theories of embodied semantics, suggesting that implicit mental simulation during language processing is body specific: Right- and left-handers, who perform actions differently, use correspondingly different areas of the brain for representing action verb meanings.
  • Willems, R. M., Peelen, M. V., & Hagoort, P. (2010). Cerebral lateralization of face-selective and body-selective visual areas depends on handedness. Cerebral Cortex, 20, 1719-1725. doi:10.1093/cercor/bhp234.

    Abstract

    The left-hemisphere dominance for language is a core example of the functional specialization of the cerebral hemispheres. The degree of left-hemisphere dominance for language depends on hand preference: Whereas the majority of right-handers show left-hemispheric language lateralization, this number is reduced in left-handers. Here, we assessed whether handedness analogously has an influence upon lateralization in the visual system. Using functional magnetic resonance imaging, we localized 4 more or less specialized extrastriate areas in left- and right-handers, namely fusiform face area (FFA), extrastriate body area (EBA), fusiform body area (FBA), and human motion area (human middle temporal [hMT]). We found that lateralization of FFA and EBA depends on handedness: These areas were right lateralized in right-handers but not in left-handers. A similar tendency was observed in FBA but not in hMT. We conclude that the relationship between handedness and hemispheric lateralization extends to functionally lateralized parts of visual cortex, indicating a general coupling between cerebral lateralization and handedness. Our findings indicate that hemispheric specialization is not fixed but can vary considerably across individuals even in areas engaged relatively early in the visual system.
  • Willems, R. M., De Boer, M., De Ruiter, J. P., Noordzij, M. L., Hagoort, P., & Toni, I. (2010). A dissociation between linguistic and communicative abilities in the human brain. Psychological Science, 21, 8-14. doi:10.1177/0956797609355563.

    Abstract

    Although language is an effective vehicle for communication, it is unclear how linguistic and communicative abilities relate to each other. Some researchers have argued that communicative message generation involves perspective taking (mentalizing), and—crucially—that mentalizing depends on language. We employed a verbal communication paradigm to directly test whether the generation of a communicative action relies on mentalizing and whether the cerebral bases of communicative message generation are distinct from parts of cortex sensitive to linguistic variables. We found that dorsomedial prefrontal cortex, a brain area consistently associated with mentalizing, was sensitive to the communicative intent of utterances, irrespective of linguistic difficulty. In contrast, left inferior frontal cortex, an area known to be involved in language, was sensitive to the linguistic demands of utterances, but not to communicative intent. These findings show that communicative and linguistic abilities rely on cerebrally (and computationally) distinct mechanisms.
  • Willems, R. M., & Hagoort, P. (2007). Neural evidence for the interplay between language, gesture, and action: A review. Brain and Language, 101(3), 278-289. doi:10.1016/j.bandl.2007.03.004.

    Abstract

    Co-speech gestures embody a form of manual action that is tightly coupled to the language system. As such, the co-occurrence of speech and co-speech gestures is an excellent example of the interplay between language and action. There are, however, other ways in which language and action can be thought of as closely related. In this paper we will give an overview of studies in cognitive neuroscience that examine the neural underpinnings of links between language and action. Topics include neurocognitive studies of motor representations of speech sounds, action-related language, sign language and co-speech gestures. It will be concluded that there is strong evidence on the interaction between speech and gestures in the brain. This interaction however shares general properties with other domains in which there is interplay between language and action.
  • Willems, R. M., Toni, I., Hagoort, P., & Casasanto, D. (2010). Neural dissociations between action verb understanding and motor imagery. Journal of Cognitive Neuroscience, 22(10), 2387-2400. doi:10.1162/jocn.2009.21386.

    Abstract

    According to embodied theories of language, people understand a verb like throw, at least in part, by mentally simulating throwing. This implicit simulation is often assumed to be similar or identical to motor imagery. Here we used fMRI to test whether implicit simulations of actions during language understanding involve the same cortical motor regions as explicit motor imagery. Healthy participants were presented with verbs related to hand actions (e.g., to throw) and nonmanual actions (e.g., to kneel). They either read these verbs (lexical decision task) or actively imagined performing the actions named by the verbs (imagery task). Primary motor cortex showed effector-specific activation during imagery, but not during lexical decision. Parts of premotor cortex distinguished manual from nonmanual actions during both lexical decision and imagery, but there was no overlap or correlation between regions activated during the two tasks. These dissociations suggest that implicit simulation and explicit imagery cued by action verbs may involve different types of motor representations and that the construct of “mental simulation” should be distinguished from “mental imagery” in embodied theories of language.
  • Willems, R. M., & Varley, R. (2010). Neural insights into the relation between language and communication. Frontiers in Human Neuroscience, 4, 203. doi:10.3389/fnhum.2010.00203.

    Abstract

    The human capacity to communicate has been hypothesized to be causally dependent upon language. Intuitively this seems plausible since most communication relies on language. Moreover, intention recognition abilities (as a necessary prerequisite for communication) and language development seem to co-develop. Here we review evidence from neuroimaging as well as from neuropsychology to evaluate the relationship between communicative and linguistic abilities. Our review indicates that communicative abilities are best considered as neurally distinct from language abilities. This conclusion is based upon evidence showing that humans rely on different cortical systems when designing a communicative message for someone else as compared to when performing core linguistic tasks, as well as upon observations of individuals with severe language loss after extensive lesions to the language system, who are still able to perform tasks involving intention understanding.
  • Willems, R. M. (2007). The neural construction of a Tinkertoy [‘Journal club’ review]. The Journal of Neuroscience, 27, 1509-1510. doi:10.1523/JNEUROSCI.0005-07.2007.
  • Wilms, V., Drijvers, L., & Brouwer, S. (2022). The Effects of Iconic Gestures and Babble Language on Word Intelligibility in Sentence Context. Journal of Speech, Language, and Hearing Research, 65, 1822-1838. doi:10.1044/2022_JSLHR-21-00387.

    Abstract

    Purpose: This study investigated to what extent iconic co-speech gestures help word intelligibility in sentence context in two different linguistic maskers (native vs. foreign). It was hypothesized that sentence recognition improves with the presence of iconic co-speech gestures and with foreign compared to native babble. Method: Thirty-two native Dutch participants performed a Dutch word recognition task in context in which they were presented with videos in which an actress uttered short Dutch sentences (e.g., Ze begint te openen, “She starts to open”). Participants were presented with a total of six audiovisual conditions: no background noise (i.e., clear condition) without gesture, no background noise with gesture, French babble without gesture, French babble with gesture, Dutch babble without gesture, and Dutch babble with gesture; and they were asked to type down what was said by the Dutch actress. The accurate identification of the action verbs at the end of the target sentences was measured. Results: The results demonstrated that performance on the task was better in the gesture compared to the nongesture conditions (i.e., gesture enhancement effect). In addition, performance was better in French babble than in Dutch babble. Conclusions: Listeners benefit from iconic co-speech gestures during communication and from foreign background speech compared to native. These insights into multimodal communication may be valuable to everyone who engages in multimodal communication and especially to a public who often works in public places where competing speech is present in the background.
  • Wirthlin, M., Chang, E. F., Knörnschild, M., Krubitzer, L. A., Mello, C. V., Miller, C. T., Pfenning, A. R., Vernes, S. C., Tchernichovski, O., & Yartsev, M. M. (2019). A modular approach to vocal learning: Disentangling the diversity of a complex behavioral trait. Neuron, 104(1), 87-99. doi:10.1016/j.neuron.2019.09.036.

    Abstract

    Vocal learning is a behavioral trait in which the social and acoustic environment shapes the vocal repertoire of individuals. Over the past century, the study of vocal learning has progressed at the intersection of ecology, physiology, neuroscience, molecular biology, genomics, and evolution. Yet, despite the complexity of this trait, vocal learning is frequently described as a binary trait, with species being classified as either vocal learners or vocal non-learners. As a result, studies have largely focused on a handful of species for which strong evidence for vocal learning exists. Recent studies, however, suggest a continuum in vocal learning capacity across taxa. Here, we further suggest that vocal learning is a multi-component behavioral phenotype comprised of distinct yet interconnected modules. Discretizing the vocal learning phenotype into its constituent modules would facilitate integration of findings across a wider diversity of species, taking advantage of the ways in which each excels in a particular module, or in a specific combination of features. Such comparative studies can improve understanding of the mechanisms and evolutionary origins of vocal learning. We propose an initial set of vocal learning modules supported by behavioral and neurobiological data and highlight the need for diversifying the field in order to disentangle the complexity of the vocal learning phenotype.

  • Witteman, M. J., & Segers, E. (2010). The modality effect tested in children in a user-paced multimedia environment. Journal of Computer Assisted Learning, 26, 132-142. doi:10.1111/j.1365-2729.2009.00335.x.

    Abstract

    The modality learning effect, according to Mayer (2001), proposes that learning is enhanced when information is presented in both the visual and auditory domain (e.g., pictures and spoken information), compared to presenting information solely in the visual channel (e.g., pictures and written text). Most of the evidence for this effect comes from adults in a laboratory setting. Therefore, we tested the modality effect with 80 children in the highest grade of elementary school, in a naturalistic setting. In a between-subjects design children either saw representational pictures with speech or representational pictures with text. Retention and transfer knowledge was tested at three moments: immediately after the intervention, one day after, and after one week. The present study did not find any evidence for a modality effect in children when the lesson is learner-paced. Instead, we found a reversed modality effect directly after the intervention for retention. A reversed modality effect was also found for the transfer questions one day later. This effect was robust, even when controlling for individual differences.
  • Wittenburg, P. (2003). The DOBES model of language documentation. Language Documentation and Description, 1, 122-139.
  • Wittenburg, P. (2010). Archiving and accessing language resources. Concurrency and Computation: Practice and Experience, 22(17), 2354-2368. doi:10.1002/cpe.1605.

    Abstract

    Languages are among the most complex systems that evolution has created. With an unforeseen speed many of these unique results of evolution are currently disappearing: every two weeks one of the 6500 still spoken languages is dying and many are subject to extreme changes due to globalization. Experts understood the need to document the languages and preserve the cultural and linguistic treasures embedded in them for future generations. Linguistic theory will also need to consider the variation of the linguistic systems encoded in languages to improve our understanding of how human minds process language material; thus accessibility to all types of resources is increasingly crucial. Deeper insights into human language processing and a higher degree of integration and interoperability between resources will also improve our language processing technology. The DOBES programme is focussing on the documentation and preservation of language material. The Max Planck Institute developed the Language Archiving Technology to help researchers when creating, archiving and accessing language resources. The recently started CLARIN research infrastructure has as its main goals to achieve broad visibility and easy accessibility of language resources.
  • Wnuk, E., Verkerk, A., Levinson, S. C., & Majid, A. (2022). Color technology is not necessary for rich and efficient color language. Cognition, 229: 105223. doi:10.1016/j.cognition.2022.105223.

    Abstract

    The evolution of basic color terms in language is claimed to be stimulated by technological development, involving technological control of color or exposure to artificially colored objects. Accordingly, technologically “simple” non-industrialized societies are expected to have poor lexicalization of color, i.e., only rudimentary lexica of 2, 3 or 4 basic color terms, with unnamed gaps in the color space. While it may indeed be the case that technology stimulates lexical growth of color terms, it is sometimes considered a sine qua non for color salience and lexicalization. We provide novel evidence that this overlooks the role of the natural environment, and people's engagement with the environment, in the evolution of color vocabulary. We introduce the Maniq—nomadic hunter-gatherers with no color technology, but who have a basic color lexicon of 6 or 7 terms, thus of the same order as large languages like Vietnamese and Hausa, and who routinely talk about color. We examine color language in Maniq and compare it to available data in other languages to demonstrate it has remarkably high consensual color term usage, on a par with English, and high coding efficiency. This shows colors can matter even for non-industrialized societies, suggesting technology is not necessary for color language. Instead, factors such as perceptual prominence of color in natural environments, its practical usefulness across communicative contexts, and symbolic importance can all stimulate elaboration of color language.
  • Wolf, M. C., Muijselaar, M. M. L., Boonstra, A. M., & De Bree, E. H. (2019). The relationship between reading and listening comprehension: Shared and modality-specific components. Reading and Writing, 32(7), 1747-1767. doi:10.1007/s11145-018-9924-8.

    Abstract

    This study aimed to increase our understanding on the relationship between reading and listening comprehension. Both in comprehension theory and in educational practice, reading and listening comprehension are often seen as interchangeable, overlooking modality-specific aspects of them separately. Three questions were addressed. First, it was examined to what extent reading and listening comprehension comprise modality-specific, distinct skills or an overlapping, domain-general skill in terms of the amount of explained variance in one comprehension type by the opposite comprehension type. Second, general and modality-unique subskills of reading and listening comprehension were sought by assessing the contributions of the foundational skills word reading fluency, vocabulary, memory, attention, and inhibition to both comprehension types. Lastly, the practice of using either listening comprehension or vocabulary as a proxy of general comprehension was investigated. Reading and listening comprehension tasks with the same format were assessed in 85 second and third grade children. Analyses revealed that reading comprehension explained 34% of the variance in listening comprehension, and listening comprehension 40% of reading comprehension. Vocabulary and word reading fluency were found to be shared contributors to both reading and listening comprehension. None of the other cognitive skills contributed significantly to reading or listening comprehension. These results indicate that only part of the comprehension process is indeed domain-general and not influenced by the modality in which the information is provided. Especially vocabulary seems to play a large role in this domain-general part. The findings warrant a more prominent focus of modality-specific aspects of both reading and listening comprehension in research and education.
  • Womelsdorf, T., Schoffelen, J.-M., Oostenveld, R., Singer, W., Desimone, R., Engel, A. K., & Fries, P. (2007). Modulation of neuronal interactions through neuronal synchronization. Science, 316, 1609-1612. doi:10.1126/science.1139597.

    Abstract

    Brain processing depends on the interactions between neuronal groups. Those interactions are governed by the pattern of anatomical connections and by yet unknown mechanisms that modulate the effective strength of a given connection. We found that the mutual influence among neuronal groups depends on the phase relation between rhythmic activities within the groups. Phase relations supporting interactions between the groups preceded those interactions by a few milliseconds, consistent with a mechanistic role. These effects were specific in time, frequency, and space, and we therefore propose that the pattern of synchronization flexibly determines the pattern of neuronal interactions.
  • Xiang, H.-D., Fonteijn, H. M., Norris, D. G., & Hagoort, P. (2010). Topographical functional connectivity pattern in the perisylvian language networks. Cerebral Cortex, 20, 549-560. doi:10.1093/cercor/bhp119.

    Abstract

    We performed a resting-state functional connectivity study to investigate directly the functional correlations within the perisylvian language networks by seeding from 3 subregions of Broca's complex (pars opercularis, pars triangularis, and pars orbitalis) and their right hemisphere homologues. A clear topographical functional connectivity pattern in the left middle frontal, parietal, and temporal areas was revealed for the 3 left seeds. This is the first demonstration that a functional connectivity topology can be observed in the perisylvian language networks. The results support the assumption of the functional division for phonology, syntax, and semantics of Broca's complex as proposed by the memory, unification, and control (MUC) model and indicate a topographical functional organization in the perisylvian language networks, which suggests a possible division of labor for phonological, syntactic, and semantic function in the left frontal, parietal, and temporal areas.
  • Yang, J., Van den Bosch, A., & Frank, S. L. (2022). Unsupervised text segmentation predicts eye fixations during reading. Frontiers in Artificial Intelligence, 5: 731615. doi:10.3389/frai.2022.731615.

    Abstract

    Words typically form the basis of psycholinguistic and computational linguistic studies about sentence processing. However, recent evidence shows the basic units during reading, i.e., the items in the mental lexicon, are not always words, but could also be sub-word and supra-word units. To recognize these units, human readers require a cognitive mechanism to learn and detect them. In this paper, we assume eye fixations during reading reveal the locations of the cognitive units, and that the cognitive units are analogous with the text units discovered by unsupervised segmentation models. We predict eye fixations by model-segmented units on both English and Dutch text. The results show the model-segmented units predict eye fixations better than word units. This finding suggests that the predictive performance of model-segmented units indicates their plausibility as cognitive units. The Less-is-Better (LiB) model, which finds the units that minimize both long-term and working memory load, offers advantages both in terms of prediction score and efficiency among alternative models. Our results also suggest that modeling the least-effort principle for the management of long-term and working memory can lead to inferring cognitive units. Overall, the study supports the theory that the mental lexicon stores not only words but also smaller and larger units, suggests that fixation locations during reading depend on these units, and shows that unsupervised segmentation models can discover these units.
  • Zeller, J., Bylund, E., & Lewis, A. G. (2022). The parser consults the lexicon in spite of transparent gender marking: EEG evidence from noun class agreement processing in Zulu. Cognition, 226: 105148. doi:10.1016/j.cognition.2022.105148.

    Abstract

    In sentence comprehension, the parser in many languages has the option to use both the morphological form of a noun and its lexical representation when evaluating agreement. The additional step of consulting the lexicon incurs processing costs, and an important question is whether the parser takes that step even when the formal cues alone are sufficiently reliable to evaluate agreement. Our study addressed this question using electrophysiology in Zulu, a language where both grammatical gender and number features are reliably expressed formally by noun class prefixes, but only gender features are lexically specified. We observed reduced, more topographically focal LAN, and more frontally distributed alpha/beta power effects for gender compared to number agreement violations. These differences provide evidence that for gender mismatches, even though the formal cues are reliable, the parser nevertheless takes the additional step of consulting the noun's lexical representation, a step which is not available for number.

  • Zeshan, U. (2003). Aspects of Türk Işaret Dili (Turkish Sign Language). Sign Language and Linguistics, 6(1), 43-75. doi:10.1075/sll.6.1.04zes.

    Abstract

    This article provides a first overview of some striking grammatical structures in Türk İşaret Dili (Turkish Sign Language, TID), the sign language used by the Deaf community in Turkey. The data are described with a typological perspective in mind, focusing on aspects of TID grammar that are typologically unusual across sign languages. After giving an overview of the historical, sociolinguistic and educational background of TID and the language community using this sign language, five domains of TID grammar are investigated in detail. These include a movement derivation signalling completive aspect, three types of nonmanual negation — headshake, backward head tilt, and puffed cheeks — and their distribution, cliticization of the negator NOT to a preceding predicate host sign, an honorific whole-entity classifier used to refer to humans, and a question particle, its history and current status in the language. A final evaluation points out the significance of these data for sign language research and looks at perspectives for a deeper understanding of the language and its history.
  • Zhang, Q., Zhou, Y., & Lou, H. (2022). The dissociation between age of acquisition and word frequency effects in Chinese spoken picture naming. Psychological Research, 86, 1918-1929. doi:10.1007/s00426-021-01616-0.

    Abstract

    This study aimed to examine the locus of age of acquisition (AoA) and word frequency (WF) effects in Chinese spoken picture naming, using a picture–word interference task. We conducted four experiments manipulating the properties of picture names (AoA in Experiments 1 and 2, while controlling WF; and WF in Experiments 3 and 4, while controlling AoA), and the relations between distractors and targets (semantic or phonological relatedness). Both Experiments 1 and 2 demonstrated AoA effects in picture naming; pictures of early acquired concepts were named faster than those acquired later. There was an interaction between AoA and semantic relatedness, but not between AoA and phonological relatedness, suggesting localisation of AoA effects at the stage of lexical access in picture naming. Experiments 3 and 4 demonstrated WF effects: pictures of high-frequency concepts were named faster than those of low-frequency concepts. WF interacted with both phonological and semantic relatedness, suggesting localisation of WF effects at multiple levels of picture naming, including lexical access and phonological encoding. Our findings show that AoA and WF effects exist in Chinese spoken word production and may arise at related processes of lexical selection.
  • Wu, S., Zhang, D., Li, X., Zhao, J., Sun, X., Shi, L., Mao, Y., Zhang, Y., & Jiang, F. (2022). Siblings and Early Childhood Development: Evidence from a Population-Based Cohort in Preschoolers from Shanghai. International Journal of Environmental Research and Public Health, 19(9): 5739. doi:10.3390/ijerph19095739.

    Abstract

    (1) Background: The current study aims to investigate the association between the presence of a sibling and early childhood development (ECD). (2) Methods: Data were obtained from a large-scale population-based cohort in Shanghai. Children were followed from three to six years old. Based on birth order, the sample was divided into four groups: single child, younger child, elder child, and single-elder transfer (transfer from single-child to elder-child). Psychosocial well-being and school readiness were assessed with the total difficulties score from the Strengths and Difficulties Questionnaire (SDQ) and the overall development score from the early Human Capability Index (eHCI), respectively. A multilevel model was conducted to evaluate the main effect of each sibling group and the group × age interaction effect on psychosocial well-being and school readiness. (3) Results: Across all measures, children in the younger child group presented with lower psychosocial problems (β = −0.96, 95% CI: −1.44, −0.48, p < 0.001) and higher school readiness scores (β = 1.56, 95% CI: 0.61, 2.51, p = 0.001). No significant difference, or marginally significant difference, was found between the elder group and the single-child group. Compared to the single-child group, the single-elder transfer group presented with slower development on both psychosocial well-being (Age × Group: β = 0.37, 95% CI: 0.18, 0.56, p < 0.001) and school readiness (Age × Group: β = −0.75, 95% CI: −1.10, −0.40, p < 0.001). The sibling-ECD effects did not differ between children from families of low versus high socioeconomic status. (4) Conclusion: The current study suggested the presence of a sibling was not associated with worse development outcomes in general. Rather, children with an elder sibling are more likely to present with better ECD.
  • Zhao, J., Yu, Z., Sun, X., Wu, S., Zhang, J., Zhang, D., Zhang, Y., & Jiang, F. (2022). Association between screen time trajectory and early childhood development in children in China. JAMA Pediatrics, 176(8), 768-775. doi:10.1001/jamapediatrics.2022.1630.

    Abstract

    Importance: Screen time has become an integral part of children's daily lives. Nevertheless, the developmental consequences of screen exposure in young children remain unclear.

    Objective: To investigate the screen time trajectory from 6 to 72 months of age and its association with children's development at age 72 months in a prospective birth cohort.

    Design, setting, and participants: Women in Shanghai, China, who were at 34 to 36 gestational weeks and had an expected delivery date between May 2012 and July 2013 were recruited for this cohort study. Their children were followed up at 6, 9, 12, 18, 24, 36, 48, and 72 months of age. Children's screen time was classified into 3 groups at age 6 months: continued low (ie, stable amount of screen time), late increasing (ie, sharp increase in screen time at age 36 months), and early increasing (ie, large amount of screen time in early stages that remained stable after age 36 months). Cognitive development was assessed by specially trained research staff in a research clinic. Of 262 eligible mother-offspring pairs, 152 dyads had complete data regarding all variables of interest and were included in the analyses. Data were analyzed from September 2019 to November 2021.

    Exposures: Mothers reported screen times of children at 6, 9, 12, 18, 24, 36, 48, and 72 months of age.

    Main outcomes and measures: The cognitive development of children was evaluated using the Wechsler Intelligence Scale for Children, 4th edition, at age 72 months. Social-emotional development was measured by the Strengths and Difficulties Questionnaire, which was completed by the child's mother. The study described demographic characteristics, maternal mental health, child's temperament at age 6 months, and mental development at age 12 months by subgroups clustered by a group-based trajectory model. Group difference was examined by analysis of variance.

    Results: A total of 152 mother-offspring dyads were included in this study, including 77 girls (50.7%) and 75 boys (49.3%) (mean [SD] age of the mothers was 29.7 [3.3] years). Children's screen time trajectory from age 6 to 72 months was classified into 3 groups: continued low (110 [72.4%]), late increasing (17 [11.2%]), and early increasing (25 [16.4%]). Compared with the continued low group, the late increasing group had lower scores on the Full-Scale Intelligence Quotient (β coefficient, -8.23; 95% CI, -15.16 to -1.30; P < .05) and the General Ability Index (β coefficient, -6.42; 95% CI, -13.70 to 0.86; P = .08); the early increasing group presented with lower scores on the Full-Scale Intelligence Quotient (β coefficient, -6.68; 95% CI, -12.35 to -1.02; P < .05) and the Cognitive Proficiency Index (β coefficient, -10.56; 95% CI, -17.23 to -3.90; P < .01) and a higher total difficulties score (β coefficient, 2.62; 95% CI, 0.49-4.76; P < .05).

    Conclusions and relevance: This cohort study found that excessive screen time in early years was associated with poor cognitive and social-emotional development. This finding may be helpful in encouraging awareness among parents of the importance of onset and duration of children's screen time.
  • Zheng, X., & Lemhöfer, K. (2019). The “semantic P600” in second language processing: When syntax conflicts with semantics. Neuropsychologia, 127, 131-147. doi:10.1016/j.neuropsychologia.2019.02.010.

    Abstract

    In sentences like “the mouse that chased the cat was hungry”, the syntactically correct interpretation (the mouse chases the cat) is contradicted by semantic and pragmatic knowledge. Previous research has shown that L1 speakers sometimes base sentence interpretation on this type of knowledge (so-called “shallow” or “good-enough” processing). We made use of both behavioural and ERP measurements to investigate whether L2 learners differ from native speakers in the extent to which they engage in “shallow” syntactic processing. German learners of Dutch as well as Dutch native speakers read sentences containing relative clauses (as in the example above) for which the plausible thematic roles were or were not reversed, and made plausibility judgments. The results show that behaviourally, L2 learners had more difficulties than native speakers to discriminate plausible from implausible sentences. In the ERPs, we replicated the previously reported finding of a “semantic P600” for semantic reversal anomalies in native speakers, probably reflecting the effort to resolve the syntax-semantics conflict. In L2 learners, though, this P600 was largely attenuated and surfaced only in those trials that were judged correctly for plausibility. These results generally point at a more prevalent, but not exclusive occurrence of shallow syntactic processing in L2 learners.
  • Zhernakova, A., Elbers, C. C., Ferwerda, B., Romanos, J., Trynka, G., Dubois, P. C., De Kovel, C. G. F., Franke, L., Oosting, M., Barisani, D., Bardella, M. T., Joosten, L. A. B., Saavalainen, P., van Heel, D. A., Catassi, C., Netea, M. G., Wijmenga, C., & Finnish Celiac Dis Study, G. (2010). Evolutionary and Functional Analysis of Celiac Risk Loci Reveals SH2B3 as a Protective Factor against Bacterial Infection. American Journal of Human Genetics, 86(6), 970-977. doi:10.1016/j.ajhg.2010.05.004.

    Abstract

    Celiac disease (CD) is an intolerance to dietary proteins of wheat, barley, and rye. CD may have substantial morbidity, yet it is quite common with a prevalence of 1%-2% in Western populations. It is not clear why the CD phenotype is so prevalent despite its negative effects on human health, especially because appropriate treatment in the form of a gluten-free diet has only been available since the 1950s, when dietary gluten was discovered to be the triggering factor. The high prevalence of CD might suggest that genes underlying this disease may have been favored by the process of natural selection. We assessed signatures of selection for ten confirmed CD-associated loci in several genome-wide data sets, comprising 8154 controls from four European populations and 195 individuals from a North African population, by studying haplotype lengths via the integrated haplotype score (iHS) method. Consistent signs of positive selection for CD-associated derived alleles were observed in three loci: IL12A, IL18RAP, and SH2B3. For the SH2B3 risk allele, we also show a difference in allele frequency distribution (F(st)) between HapMap phase II populations. Functional investigation of the effect of the SH2B3 genotype in response to lipopolysaccharide and muramyl dipeptide revealed that carriers of the SH2B3 rs3184504*A risk allele showed stronger activation of the NOD2 recognition pathway. This suggests that SH2B3 plays a role in protection against bacterial infection, and it provides a possible explanation for the selective sweep on SH2B3, which occurred sometime between 1200 and 1700 years ago.
  • Zhu, Z., Bastiaansen, M. C. M., Hakun, J. G., Petersson, K. M., Wang, S., & Hagoort, P. (2019). Semantic unification modulates N400 and BOLD signal change in the brain: A simultaneous EEG-fMRI study. Journal of Neurolinguistics, 52: 100855. doi:10.1016/j.jneuroling.2019.100855.

    Abstract

    Semantic unification during sentence comprehension has been associated with amplitude change of the N400 in event-related potential (ERP) studies, and activation in the left inferior frontal gyrus (IFG) in functional magnetic resonance imaging (fMRI) studies. However, the specificity of this activation to semantic unification remains unknown. To more closely examine the brain processes involved in semantic unification, we employed simultaneous EEG-fMRI to time-lock the semantic unification related N400 change, and integrated trial-by-trial variation in both N400 and BOLD change beyond the condition-level BOLD change difference measured in traditional fMRI analyses. Participants read sentences in which semantic unification load was parametrically manipulated by varying cloze probability. Separately, ERP and fMRI results replicated previous findings, in that semantic unification load parametrically modulated the amplitude of N400 and cortical activation. Integrated EEG-fMRI analyses revealed a different pattern in which functional activity in the left IFG and bilateral supramarginal gyrus (SMG) was associated with N400 amplitude, with the left IFG activation and bilateral SMG activation being selective to the condition-level and trial-level of semantic unification load, respectively. By employing the EEG-fMRI integrated analyses, this study is among the first to shed light on how to integrate trial-level variation in language comprehension.
  • Ziegler, A., DeStefano, A. L., König, I. R., Bardel, C., Brinza, D., Bull, S., Cai, Z., Glaser, B., Jiang, W., Lee, K. E., Li, C. X., Li, J., Li, X., Majoram, P., Meng, Y., Nicodemus, K. K., Platt, A., Schwarz, D. F., Shi, W., Shugart, Y. Y., Stassen, H. H., Sun, Y. V., Won, S., Wang, W., Wahba, G., Zagaar, U. A., & Zhao, Z. (2007). Data mining, neural nets, trees–problems 2 and 3 of Genetic Analysis Workshop 15. Genetic Epidemiology, 31(Suppl 1), S51-S60. doi:10.1002/gepi.20280.

    Abstract

    Genome-wide association studies using thousands to hundreds of thousands of single nucleotide polymorphism (SNP) markers and region-wide association studies using a dense panel of SNPs are already in use to identify disease susceptibility genes and to predict disease risk in individuals. Because these tasks become increasingly important, three different data sets were provided for the Genetic Analysis Workshop 15, thus allowing examination of various novel and existing data mining methods for both classification and identification of disease susceptibility genes, gene by gene or gene by environment interaction. The approach most often applied in this presentation group was random forests because of its simplicity, elegance, and robustness. It was used for prediction and for screening for interesting SNPs in a first step. The logistic tree with unbiased selection approach appeared to be an interesting alternative to efficiently select interesting SNPs. Machine learning, specifically ensemble methods, might be useful as pre-screening tools for large-scale association studies because they can be less prone to overfitting, can be less computer processor time intensive, can easily include pair-wise and higher-order interactions compared with standard statistical approaches and can also have a high capability for classification. However, improved implementations that are able to deal with hundreds of thousands of SNPs at a time are required.
  • Zimianiti, E. (2022). Is semantic memory the winning component in second language teaching with Accelerative Integrated Method (AIM)? LingUU Journal, 6(1), 54-62.

    Abstract

    This paper constitutes a research proposal based on Rousse-Malpalt's (2019) dissertation, which extensively examines the effectiveness of the Accelerative Integrated Method (AIM) in second language (L2) learning. Although it has been found that AIM is a greatly effective method in comparison with non-implicit teaching methods, the reasons behind its success and effectiveness are yet unknown. As Semantic Memory (SM) is the component of memory responsible for the conceptualization and storage of knowledge, this paper sets out to propose an investigation of its role in the learning process of AIM and to provide insights as to why the embodied experience of learning with AIM is more effective than others. The tasks proposed for administration take into account the factors of gestures being related to a learner's memorization process and Semantic Memory. Lastly, this paper provides a future research idea about the learning mechanisms of sign languages in people with hearing deficits and in the healthy population, aiming to indicate which brain mechanisms benefit from the teaching method of AIM and reveal important brain functions for SLA via AIM.
  • Zora, H., Riad, T., & Ylinen, S. (2019). Prosodically controlled derivations in the mental lexicon. Journal of Neurolinguistics, 52: 100856. doi:10.1016/j.jneuroling.2019.100856.

    Abstract

    Swedish morphemes are classified as prosodically specified or prosodically unspecified, depending on lexical or phonological stress, respectively. Here, we investigate the allomorphy of the suffix -(i)sk, which indicates the distinction between lexical and phonological stress; if attached to a lexically stressed morpheme, it takes a non-syllabic form (-sk), whereas if attached to a phonologically stressed morpheme, an epenthetic vowel is inserted (-isk). Using mismatch negativity (MMN), we explored the neural processing of this allomorphy across lexically stressed and phonologically stressed morphemes. In an oddball paradigm, participants were occasionally presented with congruent and incongruent derivations, created by the suffix -(i)sk, within the repetitive presentation of their monomorphemic stems. The results indicated that the congruent derivation of the lexically stressed stem elicited a larger MMN than the incongruent sequences of the same stem and the derivational suffix, whereas after the phonologically stressed stem a non-significant tendency towards an opposite pattern was observed. We argue that the significant MMN response to the congruent derivation in the lexical stress condition is in line with lexical MMN, indicating a holistic processing of the sequence of lexically stressed stem and derivational suffix. The enhanced MMN response to the incongruent derivation in the phonological stress condition, on the other hand, is suggested to reflect combinatorial processing of the sequence of phonologically stressed stem and derivational suffix. These findings bring a new aspect to the dual-system approach to neural processing of morphologically complex words, namely the specification of word stress.
  • Zora, H., Gussenhoven, C., Tremblay, A., & Liu, F. (2022). Editorial: Crosstalk between intonation and lexical tones: Linguistic, cognitive and neuroscience perspectives. Frontiers in Psychology, 13: 1101499. doi:10.3389/fpsyg.2022.1101499.

    Abstract

    The interplay between categorical and continuous aspects of the speech signal remains central and yet controversial in the fields of phonetics and phonology. The division between phonological abstractions and phonetic variations has been particularly relevant to the unraveling of diverse communicative functions of pitch in the domain of prosody. Pitch influences vocal communication in two major but fundamentally different ways, and lexical and intonational tones exquisitely capture these functions. Lexical tone contrasts convey lexical meanings as well as derivational meanings at the word level and are grammatically encoded as discrete structures. Intonational tones, on the other hand, signal post-lexical meanings at the phrasal level and typically allow gradient pragmatic variations. Since categorical and gradient uses of pitch are ubiquitous and closely intertwined in their physiological and psychological processes, further research is warranted for a more detailed understanding of their structural and functional characterisations. This Research Topic addresses this matter from a wide range of perspectives, including first and second language acquisition, speech production and perception, structural and functional diversity, and working with distinct languages and experimental measures. In the following, we provide a short overview of the contributions submitted to this topic.

    Additional information

    Also published as a book chapter (2023).
  • Zormpa, E., Meyer, A. S., & Brehm, L. (2019). Slow naming of pictures facilitates memory for their names. Psychonomic Bulletin & Review, 26(5), 1675-1682. doi:10.3758/s13423-019-01620-x.

    Abstract

    Speakers remember their own utterances better than those of their interlocutors, suggesting that language production is beneficial to memory. This may be partly explained by a generation effect: The act of generating a word is known to lead to a memory advantage (Slamecka & Graf, 1978). In earlier work, we showed a generation effect for recognition of images (Zormpa, Brehm, Hoedemaker, & Meyer, 2019). Here, we tested whether the recognition of their names would also benefit from name generation. Testing whether picture naming improves memory for words was our primary aim, as it serves to clarify whether the representations affected by generation are visual or conceptual/lexical. A secondary aim was to assess the influence of processing time on memory. Fifty-one participants named pictures in three conditions: after hearing the picture name (identity condition), backward speech, or an unrelated word. A day later, recognition memory was tested in a yes/no task. Memory in the backward speech and unrelated conditions, which required generation, was superior to memory in the identity condition, which did not require generation. The time taken by participants for naming was a good predictor of memory, such that words that took longer to be retrieved were remembered better. Importantly, that was the case only when generation was required: In the no-generation (identity) condition, processing time was not related to recognition memory performance. This work has shown that generation affects conceptual/lexical representations, making an important contribution to the understanding of the relationship between memory and language.
  • Zormpa, E., Brehm, L., Hoedemaker, R. S., & Meyer, A. S. (2019). The production effect and the generation effect improve memory in picture naming. Memory, 27(3), 340-352. doi:10.1080/09658211.2018.1510966.

    Abstract

    The production effect (better memory for words read aloud than words read silently) and the picture superiority effect (better memory for pictures than words) both improve item memory in a picture naming task (Fawcett, J. M., Quinlan, C. K., & Taylor, T. L. (2012). Interplay of the production and picture superiority effects: A signal detection analysis. Memory (Hove, England), 20(7), 655–666. doi:10.1080/09658211.2012.693510). Because picture naming requires coming up with an appropriate label, the generation effect (better memory for generated than read words) may contribute to the latter effect. In two forced-choice memory experiments, we tested the role of generation in a picture naming task on later recognition memory. In Experiment 1, participants named pictures silently or aloud with the correct name or an unreadable label superimposed. We observed a generation effect, a production effect, and an interaction between the two. In Experiment 2, unreliable labels were included to ensure full picture processing in all conditions. In this experiment, we observed a production and a generation effect but no interaction, implying the effects are dissociable. This research demonstrates the separable roles of generation and production in picture naming and their impact on memory. As such, it informs the link between memory and language production and has implications for memory asymmetries between language production and comprehension.

    Additional information

    pmem_a_1510966_sm9257.pdf
  • Zwitserlood, I., van den Bogaerde, B., & Terpstra, A. (2010). De Nederlandse Gebarentaal en het ERK. Levende Talen Magazine, 2010(5), 50-51.
  • Zwitserlood, I. (2010). De Nederlandse Gebarentaal, het Corpus NGT en het ERK. Levende Talen Magazine, 2010(8), 44-45.
  • Zwitserlood, I. (2010). Laat je vingers spreken: NGT en vingerspelling. Levende Talen Magazine, 2010(2), 46-47.
  • Zwitserlood, I. (2010). Het Corpus NGT en de dagelijkse lespraktijk (2). Levende Talen Magazine, 2010(3), 47-48.
  • Zwitserlood, I. (2010). Sign language lexicography in the early 21st century and a recently published dictionary of Sign Language of the Netherlands. International Journal of Lexicography, 23, 443-476. doi:10.1093/ijl/ecq031.

    Abstract

    Sign language lexicography has thus far been a relatively obscure area in the world of lexicography. Therefore, this article will contain background information on signed languages and the communities in which they are used, on the lexicography of sign languages, the situation in the Netherlands as well as a review of a sign language dictionary that has recently been published in the Netherlands.
  • Zwitserlood, I., & Crasborn, O. (2010). Wat kunnen we leren uit een Corpus Nederlandse Gebarentaal? WAP Nieuwsbrief, 28(2), 16-18.
  • Zwitserlood, I. (2010). Verlos ons van de glos. Levende Talen Magazine, 2010(7), 40-41.
