Publications

  • Willems, R. M., Ozyurek, A., & Hagoort, P. (2009). Differential roles for left inferior frontal and superior temporal cortex in multimodal integration of action and language. Neuroimage, 47, 1992-2004. doi:10.1016/j.neuroimage.2009.05.066.

    Abstract

    Several studies indicate that both posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG) and left inferior frontal gyrus (LIFG) are involved in integrating information from different modalities. Here we investigated the respective roles of these two areas in integration of action and language information. We exploited the fact that the semantic relationship between language and different forms of action (i.e. co-speech gestures and pantomimes) is radically different. Speech and co-speech gestures are always produced together, and gestures are not unambiguously understood without speech. On the contrary, pantomimes are not necessarily produced together with speech and can be easily understood without speech. We presented speech together with these two types of communicative hand actions in matching or mismatching combinations to manipulate semantic integration load. Left and right pSTS/MTG were only involved in semantic integration of speech and pantomimes. Left IFG on the other hand was involved in integration of speech and co-speech gestures as well as of speech and pantomimes. Effective connectivity analyses showed that depending upon the semantic relationship between language and action, LIFG modulates activation levels in left pSTS.

    This suggests that integration in pSTS/MTG involves the matching of two input streams for which there is a relatively stable common object representation, whereas integration in LIFG is better characterized as the on-line construction of a new and unified representation of the input streams. In conclusion, pSTS/MTG and LIFG are differentially involved in multimodal integration, crucially depending upon the semantic relationship between the input streams.

    Additional information

    Supplementary table S1
  • Willems, R. M., Oostenveld, R., & Hagoort, P. (2008). Early decreases in alpha and gamma band power distinguish linguistic from visual information during spoken sentence comprehension. Brain Research, 1219, 78-90. doi:10.1016/j.brainres.2008.04.065.

    Abstract

    Language is often perceived together with visual information. This raises the question of how the brain integrates information conveyed in visual and/or linguistic format during spoken language comprehension. In this study we investigated the dynamics of semantic integration of visual and linguistic information by means of time-frequency analysis of the EEG signal. A modified version of the N400 paradigm with either a word or a picture of an object being semantically incongruous with respect to the preceding sentence context was employed. Event-Related Potential (ERP) analysis showed qualitatively similar N400 effects for integration of either word or picture. Time-frequency analysis revealed early specific decreases in alpha and gamma band power for linguistic and visual information, respectively. We argue that these reflect a rapid context-based analysis of acoustic (word) or visual (picture) form information. We conclude that although full semantic integration of linguistic and visual information occurs through a common mechanism, early differences in oscillations in specific frequency bands reflect the format of the incoming information and, importantly, an early context-based detection of its congruity with respect to the preceding language context.
  • Willems, R. M., & Hagoort, P. (2009). Hand preference influences neural correlates of action observation. Brain Research, 1269, 90-104. doi:10.1016/j.brainres.2009.02.057.

    Abstract

    It has been argued that we map observed actions onto our own motor system. Here we added to this issue by investigating whether hand preference influences the neural correlates of action observation of simple, essentially meaningless hand actions. Such an influence would argue for an intricate neural coupling between action production and action observation, which goes beyond effects of motor repertoire or explicit motor training, as has been suggested before. Indeed, parts of the human motor system exhibited a close coupling between action production and action observation. Ventral premotor and inferior and superior parietal cortices showed differential activation for left- and right-handers that was similar during action production as well as during action observation. This suggests that mapping observed actions onto the observer's own motor system is a core feature of action observation - at least for actions that do not have a clear goal or meaning. Basic differences in the way we act upon the world are not only reflected in neural correlates of action production, but can also influence the brain basis of action observation.
  • Willems, R. M., Van der Haegen, L., Fisher, S. E., & Francks, C. (2014). On the other hand: Including left-handers in cognitive neuroscience and neurogenetics. Nature Reviews Neuroscience, 15, 193-201. doi:10.1038/nrn3679.

    Abstract

    Left-handers are often excluded from study cohorts in neuroscience and neurogenetics in order to reduce variance in the data. However, recent investigations have shown that the inclusion or targeted recruitment of left-handers can be informative in studies on a range of topics, such as cerebral lateralization and the genetic underpinning of asymmetrical brain development. Left-handed individuals represent a substantial portion of the human population and therefore left-handedness falls within the normal range of human diversity; thus, it is important to account for this variation in our understanding of brain functioning. We call for neuroscientists and neurogeneticists to recognize the potential of studying this often-discarded group of research subjects.
  • Willems, R. M., & Francks, C. (2014). Your left-handed brain. Frontiers for Young Minds, 2: 13. doi:10.3389/frym.2014.00013.

    Abstract

    While most people prefer to use their right hand to brush their teeth, throw a ball, or hold a tennis racket, left-handers prefer to use their left hand. This is the case for around 10 per cent of all people. There was a time (not so long ago) when left-handers were stigmatized in Western (and other) communities: it was considered a bad sign if you were left-handed, and left-handed children were often forced to write with their right hand. This is nonsensical: there is nothing wrong with being left-handed, and trying to write with the non-preferred hand is frustrating for almost everybody. As a matter of fact, science can learn from left-handers, and in this paper, we discuss how this may be the case. We review why some people are left-handed and others are not, how left-handers' brains differ from right-handers', and why scientists study left-handedness in the first place.
  • Williams, N. M., Williams, H., Majounie, E., Norton, N., Glaser, B., Morris, H. R., Owen, M. J., & O'Donovan, M. C. (2008). Analysis of copy number variation using quantitative interspecies competitive PCR. Nucleic Acids Research, 36(17): e112. doi:10.1093/nar/gkn495.

    Abstract

    Over recent years small submicroscopic DNA copy-number variants (CNVs) have been highlighted as an important source of variation in the human genome, human phenotypic diversity and disease susceptibility. Consequently, there is a pressing need for the development of methods that allow the efficient, accurate and cheap measurement of genomic copy number polymorphisms in clinical cohorts. We have developed a simple competitive PCR based method to determine DNA copy number which uses the entire genome of a single chimpanzee as a competitor thus eliminating the requirement for competitive sequences to be synthesized for each assay. This results in the requirement for only a single reference sample for all assays and dramatically increases the potential for large numbers of loci to be analysed in multiplex. In this study we establish proof of concept by accurately detecting previously characterized mutations at the PARK2 locus and then demonstrating the potential of quantitative interspecies competitive PCR (qicPCR) to accurately genotype CNVs in association studies by analysing chromosome 22q11 deletions in a sample of previously characterized patients and normal controls.
  • Witteman, M. J., Weber, A., & McQueen, J. M. (2013). Foreign accent strength and listener familiarity with an accent co-determine speed of perceptual adaptation. Attention, Perception & Psychophysics, 75, 537-556. doi:10.3758/s13414-012-0404-y.

    Abstract

    We investigated how the strength of a foreign accent and varying types of experience with foreign-accented speech influence the recognition of accented words. In Experiment 1, native Dutch listeners with limited or extensive prior experience with German-accented Dutch completed a cross-modal priming experiment with strongly, medium, and weakly accented words. Participants with limited experience were primed by the medium and weakly accented words, but not by the strongly accented words. Participants with extensive experience were primed by all accent types. In Experiments 2 and 3, Dutch listeners with limited experience listened to a short story before doing the cross-modal priming task. In Experiment 2, the story was spoken by the priming task speaker and either contained strongly accented words or did not. Strongly accented exposure led to immediate priming by novel strongly accented words, while exposure to the speaker without strongly accented tokens led to priming only in the experiment’s second half. In Experiment 3, listeners listened to the story with strongly accented words spoken by a different German-accented speaker. Listeners were primed by the strongly accented words, but again only in the experiment’s second half. Together, these results show that adaptation to foreign-accented speech is rapid but depends on accent strength and on listener familiarity with those strongly accented words.
  • Witteman, M. J., Weber, A., & McQueen, J. M. (2014). Tolerance for inconsistency in foreign-accented speech. Psychonomic Bulletin & Review, 21, 512-519. doi:10.3758/s13423-013-0519-8.

    Abstract

    Are listeners able to adapt to a foreign-accented speaker who has, as is often the case, an inconsistent accent? Two groups of native Dutch listeners participated in a cross-modal priming experiment, either in a consistent-accent condition (German-accented items only) or in an inconsistent-accent condition (German-accented and nativelike pronunciations intermixed). The experimental words were identical for both groups (words with vowel substitutions characteristic of German-accented speech); additional contextual words differed in accentedness (German-accented or nativelike words). All items were spoken by the same speaker: a German native who could produce the accented forms but could also pass for a Dutch native speaker. Listeners in the consistent-accent group were able to adapt quickly to the speaker (i.e., showed facilitatory priming for words with vocalic substitutions). Listeners in the inconsistent-accent condition showed adaptation to words with vocalic substitutions only in the second half of the experiment. These results indicate that adaptation to foreign-accented speech is rapid. Accent inconsistency slows listeners down initially, but a short period of additional exposure is enough for them to adapt to the speaker. Listeners can therefore tolerate inconsistency in foreign-accented speech.
  • Wittenburg, P., Skiba, R., & Trilsbeek, P. (2004). Technology and Tools for Language Documentation. Language Archive Newsletter, 1(4), 3-4.
  • Wittenburg, P., Skiba, R., & Trilsbeek, P. (2005). The language archive at the MPI: Contents, tools, and technologies. Language Archive Newsletter, 5, 7-9.
  • Wittenburg, P. (2004). Training Course in Lithuania. Language Archive Newsletter, 1(2), 6-6.
  • Wittenburg, P. (2008). Die CLARIN Forschungsinfrastruktur. ÖGAI-journal (Österreichische Gesellschaft für Artificial Intelligence), 27, 10-17.
  • Wittenburg, P., Dirksmeyer, R., Brugman, H., & Klaas, G. (2004). Digital formats for images, audio and video. Language Archive Newsletter, 1(1), 3-6.
  • Wittenburg, P. (2004). International Expert Meeting on Access Management for Distributed Language Archives. Language Archive Newsletter, 1(3), 12-12.
  • Wittenburg, P. (2004). Final review of INTERA. Language Archive Newsletter, 1(4), 11-12.
  • Wittenburg, P. (2004). LinguaPax Forum on Language Diversity, Sustainability, and Peace. Language Archive Newsletter, 1(3), 13-13.
  • Wittenburg, P. (2004). LREC conference 2004. Language Archive Newsletter, 1(3), 12-13.
  • Wittenburg, P. (2004). News from the Archive of the Max Planck Institute for Psycholinguistics. Language Archive Newsletter, 1(4), 12-12.
  • Wnuk, E., & Burenhult, N. (2014). Contact and isolation in hunter-gatherer language dynamics: Evidence from Maniq phonology (Aslian, Malay Peninsula). Studies in Language, 38(4), 956-981. doi:10.1075/sl.38.4.06wnu.
  • Wnuk, E., & Majid, A. (2014). Revisiting the limits of language: The odor lexicon of Maniq. Cognition, 131, 125-138. doi:10.1016/j.cognition.2013.12.008.

    Abstract

    It is widely believed that human languages cannot encode odors. While this is true for English, and other related languages, data from some non-Western languages challenge this view. Maniq, a language spoken by a small population of nomadic hunter–gatherers in southern Thailand, is such a language. It has a lexicon of over a dozen terms dedicated to smell. We examined the semantics of these smell terms in 3 experiments (exemplar listing, similarity judgment and off-line rating). The exemplar listing task confirmed that Maniq smell terms have complex meanings encoding smell qualities. Analyses of the similarity data revealed that the odor lexicon is coherently structured by two dimensions. The underlying dimensions are pleasantness and dangerousness, as verified by the off-line rating study. Ethnographic data illustrate that smell terms have detailed semantics tapping into broader cultural constructs. Contrary to the widespread view that languages cannot encode odors, the Maniq data show odor can be a coherent semantic domain, thus shedding new light on the limits of language.
  • Wolters, G., & Poletiek, F. H. (2008). Beslissen over aangiftes van seksueel misbruik bij kinderen. De Psycholoog, 43, 29-29.
  • Wright, S. E., & Windhouwer, M. (2013). ISOcat - im Reich der Datenkategorien. eDITion: Fachzeitschrift für Terminologie, 9(1), 8-12.

    Abstract

    The ISOcat Data Category Registry (www.isocat.org) of Technical Committee ISO/TC 37 (Terminology and other language and content resources) describes field names and values for language resources. Recommended field names and reliable definitions are intended to ensure that language data can be reused independently of applications, platforms, and communities of practice (CoP). Data Category Selections can be viewed, printed, and exported, and, after free registration, new ones can also be created.
  • Li, X., Yang, Y., & Hagoort, P. (2008). Pitch accent and lexical tone processing in Chinese discourse comprehension: An ERP study. Brain Research, 1222, 192-200. doi:10.1016/j.brainres.2008.05.031.

    Abstract

    In the present study, event-related brain potentials (ERP) were recorded to investigate the role of pitch accent and lexical tone in spoken discourse comprehension. Chinese was used as material to explore the potential difference in the nature and time course of brain responses to sentence meaning as indicated by pitch accent and to lexical meaning as indicated by tone. In both cases, the pitch contour of critical words was varied. The results showed that both inconsistent pitch accent and inconsistent lexical tone yielded N400 effects, and there was no interaction between them. The negativity evoked by inconsistent pitch accent had the same topography as that evoked by inconsistent lexical tone violation, with a maximum over central–parietal electrodes. Furthermore, the effect for the combined violations was the sum of effects for pure pitch accent and pure lexical tone violation. However, the effect for the lexical tone violation appeared approximately 90 ms earlier than the effect of the pitch accent violation. It is suggested that there might be a correspondence between the neural mechanisms underlying pitch accent and lexical meaning processing in context. They both reflect the integration of the current information into a discourse context, independent of whether the current information was sentence meaning indicated by accentuation, or lexical meaning indicated by tone. In addition, lexical meaning was processed earlier than sentence meaning conveyed by pitch accent during spoken language processing.
  • Yang, Y., Dai, B., Howell, P., Wang, X., Li, K., & Lu, C. (2014). White and Grey Matter Changes in the Language Network during Healthy Aging. PLoS One, 9(9): e108077. doi:10.1371/journal.pone.0108077.

    Abstract

    Neural structures change with age but there is no consensus on the exact processes involved. This study tested the hypothesis that white and grey matter in the language network changes during aging according to a “last in, first out” process. The fractional anisotropy (FA) of white matter and cortical thickness of grey matter were measured in 36 participants whose ages ranged from 55 to 79 years. Within the language network, the dorsal pathway connecting the mid-to-posterior superior temporal cortex (STC) and the inferior frontal cortex (IFC) was affected more by aging in both FA and thickness than the other dorsal pathway connecting the STC with the premotor cortex and the ventral pathway connecting the mid-to-anterior STC with the ventral IFC. These results were independently validated in a second group of 20 participants whose ages ranged from 50 to 73 years. The pathway that is most affected during aging matures later than the other two pathways (which are present at birth). The results are interpreted as showing that the neural structures which mature later are affected more than those that mature earlier, supporting the “last in, first out” theory.
  • Zeshan, U., Escobedo Delgado, C. E., Dikyuva, H., Panda, S., & De Vos, C. (2013). Cardinal numerals in rural sign languages: Approaching cross-modal typology. Linguistic Typology, 17(3), 357-396. doi:10.1515/lity-2013-0019.

    Abstract

    This article presents data on cardinal numerals in three sign languages from small-scale communities with hereditary deafness. The unusual features found in these data considerably extend the known range of typological variety across sign languages. Some features, such as non-decimal numeral bases, are otherwise unattested in sign languages, but familiar from spoken languages, while others, such as subtractive sub-systems, are rare in sign and speech. We conclude that for a complete typological appraisal of a domain, an approach to cross-modal typology, which includes a typologically diverse range of sign languages in addition to spoken languages, is both instructive and feasible.
  • Zeshan, U. (2004). Interrogative constructions in sign languages - Cross-linguistic perspectives. Language, 80(1), 7-39.

    Abstract

    This article reports on results from a broad crosslinguistic study based on data from thirty-five signed languages around the world. The study is the first of its kind, and the typological generalizations presented here cover the domain of interrogative structures as they appear across a wide range of geographically and genetically distinct signed languages. Manual and nonmanual ways of marking basic types of questions in signed languages are investigated. As a result, it becomes clear that the range of crosslinguistic variation is extensive for some subparameters, such as the structure of question-word paradigms, while other parameters, such as the use of nonmanual expressions in questions, show more similarities across signed languages. Finally, it is instructive to compare the findings from signed language typology to relevant data from spoken languages at a more abstract, crossmodality level.
  • Zeshan, U., Vasishta, M. N., & Sethna, M. (2005). Implementation of Indian Sign Language in educational settings. Asia Pacific Disability Rehabilitation Journal, 16(1), 16-40.

    Abstract

    This article reports on several sub-projects of research and development related to the use of Indian Sign Language in educational settings. In many countries around the world, sign languages are now recognised as the legitimate, full-fledged languages of the deaf communities that use them. In India, the development of sign language resources and their application in educational contexts is still in its initial stages. The work reported on here is the first principled and comprehensive effort to establish educational programmes in Indian Sign Language at a national level. Programmes are of several types: a) Indian Sign Language instruction for hearing people; b) sign language teacher training programmes for deaf people; and c) educational materials for use in schools for the Deaf. The conceptual approach used in the programmes for deaf students is known as bilingual education, which emphasises the acquisition of a first language, Indian Sign Language, alongside the acquisition of spoken languages, primarily in their written form.
  • Zeshan, U. (2004). Hand, head and face - negative constructions in sign languages. Linguistic Typology, 8(1), 1-58. doi:10.1515/lity.2004.003.

    Abstract

    This article presents a typology of negative constructions across a substantial number of sign languages from around the globe. After situating the topic within the wider context of linguistic typology, the main negation strategies found across sign languages are described. Nonmanual negation includes the use of head movements and facial expressions for negation and is of great importance in sign languages as well as particularly interesting from a typological point of view. As far as manual signs are concerned, independent negative particles represent the dominant strategy, but there are also instances of irregular negation in most sign languages. Irregular negatives may take the form of suppletion, cliticisation, affixing, or internal modification of a sign. The results of the study lead to interesting generalisations about similarities and differences between negatives in signed and spoken languages.
  • Zhang, J., Bao, S., Furumai, R., Kucera, K. S., Ali, A., Dean, N. M., & Wang, X.-F. (2005). Protein phosphatase 5 is required for ATR-mediated checkpoint activation. Molecular and Cellular Biology, 25, 9910-9919. doi:10.1128/MCB.25.22.9910-9919.2005.

    Abstract

    In response to DNA damage or replication stress, the protein kinase ATR is activated and subsequently transduces genotoxic signals to cell cycle control and DNA repair machinery through phosphorylation of a number of downstream substrates. Very little is known about the molecular mechanism by which ATR is activated in response to genotoxic insults. In this report, we demonstrate that protein phosphatase 5 (PP5) is required for the ATR-mediated checkpoint activation. PP5 forms a complex with ATR in a genotoxic stress-inducible manner. Interference with the expression or the activity of PP5 leads to impairment of the ATR-mediated phosphorylation of hRad17 and Chk1 after UV or hydroxyurea treatment. Similar results are obtained in ATM-deficient cells, suggesting that the observed defect in checkpoint signaling is the consequence of impaired functional interaction between ATR and PP5. In cells exposed to UV irradiation, PP5 is required to elicit an appropriate S-phase checkpoint response. In addition, loss of PP5 leads to premature mitosis after hydroxyurea treatment. Interestingly, reduced PP5 activity exerts differential effects on the formation of intranuclear foci by ATR and replication protein A, implicating a functional role for PP5 in a specific stage of the checkpoint signaling pathway. Taken together, our results suggest that PP5 plays a critical role in the ATR-mediated checkpoint activation.
  • De Zubicaray, G. I., Acheson, D. J., & Hartsuiker, R. J. (Eds.). (2013). Mind what you say - general and specific mechanisms for monitoring in speech production [Research topic] [Special Issue]. Frontiers in Human Neuroscience. Retrieved from http://www.frontiersin.org/human_neuroscience/researchtopics/mind_what_you_say_-_general_an/1197.

    Abstract

    Psycholinguistic research has typically portrayed speech production as a relatively automatic process. This is because when errors are made, they occur as seldom as one in every thousand words we utter. However, it has long been recognised that we need some form of control over what we are currently saying and what we plan to say. This capacity to both monitor our inner speech and self-correct our speech output has often been assumed to be a property of the language comprehension system. More recently, it has been demonstrated that speech production benefits from interfacing with more general cognitive processes such as selective attention, short-term memory (STM) and online response monitoring to resolve potential conflict and successfully produce the output of a verbal plan. The conditions and levels of representation according to which these more general planning, monitoring and control processes are engaged during speech production remain poorly understood. Moreover, there remains a paucity of information about their neural substrates, despite some of the first evidence of more general monitoring having come from electrophysiological studies of error related negativities (ERNs). While aphasic speech errors continue to be a rich source of information, there has been comparatively little research focus on instances of speech repair. The purpose of this Frontiers Research Topic is to provide a forum for researchers to contribute investigations employing behavioural, neuropsychological, electrophysiological, neuroimaging and virtual lesioning techniques. In addition, while the focus of the research topic is on novel findings, we welcome submission of computational simulations, review articles and methods papers.
  • De Zubicaray, G. I., Hartsuiker, R. J., & Acheson, D. J. (2014). Mind what you say—general and specific mechanisms for monitoring in speech production. Frontiers in Human Neuroscience, 8: 514. doi:10.3389/fnhum.2014.00514.

    Abstract

    For most people, speech production is relatively effortless and error-free. Yet it has long been recognized that we need some type of control over what we are currently saying and what we plan to say. Precisely how we monitor our internal and external speech has been a topic of research interest for several decades. The predominant approach in psycholinguistics has assumed monitoring of both is accomplished via systems responsible for comprehending others' speech.

    This special topic aimed to broaden the field, firstly by examining proposals that speech production might also engage more general systems, such as those involved in action monitoring. A second aim was to examine proposals for a production-specific, internal monitor. Both aims require that we also specify the nature of the representations subject to monitoring.
  • Zumer, J. M., Scheeringa, R., Schoffelen, J.-M., Norris, D. G., & Jensen, O. (2014). Occipital alpha activity during stimulus processing gates the information flow to object-selective cortex. PLoS Biology, 12(10): e1001965. doi:10.1371/journal.pbio.1001965.

    Abstract

    Given the limited processing capabilities of the sensory system, it is essential that attended information is gated to downstream areas, whereas unattended information is blocked. While it has been proposed that alpha band (8–13 Hz) activity serves to route information to downstream regions by inhibiting neuronal processing in task-irrelevant regions, this hypothesis remains untested. Here we investigate how neuronal oscillations detected by electroencephalography in visual areas during working memory encoding serve to gate information reflected in the simultaneously recorded blood-oxygenation-level-dependent (BOLD) signals recorded by functional magnetic resonance imaging in downstream ventral regions. We used a paradigm in which 16 participants were presented with faces and landscapes in the right and left hemifields; one hemifield was attended and the other unattended. We observed that decreased alpha power contralateral to the attended object predicted the BOLD signal representing the attended object in ventral object-selective regions. Furthermore, increased alpha power ipsilateral to the attended object predicted a decrease in the BOLD signal representing the unattended object. We also found that the BOLD signal in the dorsal attention network inversely correlated with visual alpha power. This is the first demonstration, to our knowledge, that oscillations in the alpha band are implicated in the gating of information from the visual cortex to the ventral stream, as reflected in the representationally specific BOLD signal. This link of sensory alpha to downstream activity provides a neurophysiological substrate for the mechanism of selective attention during stimulus processing, which not only boosts the attended information but also suppresses distraction. Although previous studies have shown a relation between the BOLD signal from the dorsal attention network and the alpha band at rest, we demonstrate such a relation during a visuospatial task, indicating that the dorsal attention network exercises top-down control of visual alpha activity.
  • Zwitserlood, I. (2008). Grammatica-vertaalmethode en nederlandse gebarentaal. Levende Talen Magazine, 95(5), 28-29.
  • Zwitserlood, I. (2009). Het Corpus NGT. Levende Talen Magazine, 6, 44-45.

    Abstract

    The Corpus NGT
  • Zwitserlood, I. (2009). Het Corpus NGT en de dagelijkse lespraktijk (1). Levende Talen Magazine, 8, 40-41.
