Publications

  • Wegman, J., Fonteijn, H. M., van Ekert, J., Tyborowska, A., Jansen, C., & Janzen, G. (2014). Gray and white matter correlates of navigational ability in humans. Human Brain Mapping, 35(6), 2561-2572. doi:10.1002/hbm.22349.

    Abstract

    Humans differ widely in their navigational abilities. Studies have shown that self-reports on navigational abilities are good predictors of performance on navigation tasks in real and virtual environments. The caudate nucleus and medial temporal lobe regions have been suggested to subserve different navigational strategies. The ability to use different strategies might underlie navigational ability differences. This study examines the anatomical correlates of self-reported navigational ability in both gray and white matter. Local gray matter (GM) volume and regional volumes were compared between good and bad navigators (N = 134) using voxel-based morphometry (VBM). To compare good and bad navigators, we also measured white matter anatomy using diffusion tensor imaging (DTI) and examined fractional anisotropy (FA) values. We observed a trend toward higher local GM volume in right anterior parahippocampal/rhinal cortex for good versus bad navigators. Good male navigators showed significantly higher local GM volume in right hippocampus than bad male navigators. Conversely, bad navigators showed increased FA values in the internal capsule, the white matter bundle closest to the caudate nucleus, and a trend toward higher local GM volume in the caudate nucleus. Furthermore, caudate nucleus regional volume correlated negatively with navigational ability. These convergent findings across imaging modalities are in line with findings showing that the caudate nucleus and the medial temporal lobes are involved in different wayfinding strategies. Our study is the first to show a link between self-reported large-scale navigational abilities and different measures of brain anatomy.
  • Weissenborn, J. (1986). Learning how to become an interlocutor. The verbal negotiation of common frames of reference and actions in dyads of 7–14 year old children. In J. Cook-Gumperz, W. A. Corsaro, & J. Streeck (Eds.), Children's worlds and children's language (pp. 377-404). Berlin: Mouton de Gruyter.
  • Weissenborn, J. (1988). Von der demonstratio ad oculos zur Deixis am Phantasma. Die Entwicklung der lokalen Referenz bei Kindern. In Karl Bühler's Theory of Language. Proceedings of the Conference held at Kirchberg, August 26, 1984 and Essen, November 21–24, 1984 (pp. 257-276). Amsterdam: Benjamins.
  • Whitmarsh, S., Barendregt, H., Schoffelen, J.-M., & Jensen, O. (2014). Metacognitive awareness of covert somatosensory attention corresponds to contralateral alpha power. NeuroImage, 85(2), 803-809. doi:10.1016/j.neuroimage.2013.07.031.

    Abstract

    Studies on metacognition have shown that participants can report on their performance on a wide range of perceptual, memory and behavioral tasks. We know little, however, about the ability to report on one's attentional focus. The degree and direction of somatosensory attention can, however, be readily discerned through suppression of alpha band frequencies in EEG/MEG produced by the somatosensory cortex. Such top-down attentional modulations of cortical excitability have been shown to result in better discrimination performance and decreased response times. In this study we asked whether the degree of attentional focus is also accessible for subjective report, and whether such evaluations correspond to the amount of somatosensory alpha activity. In response to auditory cues, participants maintained somatosensory attention to either their left or right hand for intervals varying randomly between 5 and 32 seconds, while their brain activity was recorded with MEG. Trials were terminated by a probe sound, in response to which participants reported their level of attention on the cued hand right before probe onset. Using a beamformer approach, we quantified the alpha activity in left and right somatosensory regions one second before the probe. Alpha activity from contra- and ipsilateral somatosensory cortices for high versus low attention trials was compared. As predicted, the contralateral somatosensory alpha depression correlated with higher reported attentional focus. Finally, alpha activity two to three seconds before probe onset was correlated with attentional focus. We conclude that somatosensory attention is indeed accessible to metacognitive awareness.
  • Widlok, T. (2004). Ethnography in language documentation. Language Archive Newsletter, 1(3), 4-6.
  • Widlok, T., & Burenhult, N. (2014). Sehen, riechen, orientieren. Spektrum der Wissenschaft, June 2014, 76-81.
  • Wilkins, D. (1999). A questionnaire on motion lexicalisation and motion description. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 96-115). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3002706.

    Abstract

    How do languages express ideas of movement, and how do they package features that can be part of motion, such as path and cause? This questionnaire is used to gain a picture of the lexical resources a language draws on for motion expressions. It targets issues of semantic conflation (i.e., what other semantic information besides motion may be encoded in a verb root) and patterns of semantic distribution (i.e., what types of information are encoded in the morphemes that come together to build a description of a motion event). It was originally designed for Australian languages, but has since been used around the world.
  • Wilkins, D. (1999). Eliciting contrastive use of demonstratives for objects within close personal space (all objects well within arm’s reach). In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 25-28). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.2573796.

    Abstract

    Contrastive reference, where a speaker presents or identifies one item in explicit contrast to another (I like this book but that one is boring), has special communicative and information structure properties. This can be reflected in rules of demonstrative use. For example, in some languages, terms equivalent to this and that can be used for contrastive reference in almost any spatial context. But other two-term languages stick more closely to “distance rules” for demonstratives, allowing a this-like term in close space only. This task elicits data concerning one context of contrastive reference, focusing on whether (and how) non-proximal demonstratives can be used to distinguish objects within a proximal area. The task runs like a memory game, with the consultant being asked to identify the locations of two or three hidden items arranged within arm’s reach.
  • Wilkins, D. (1999). The 1999 demonstrative questionnaire: “This” and “that” in comparative perspective. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 1-24). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.2573775.

    Abstract

    Demonstrative terms (e.g., this and that) are key to understanding how a language constructs and interprets spatial relationships. They are tricky to pin down, typically having functions that do not match “idealized” uses, and that can become invisible in narrow elicitation settings. This questionnaire is designed to identify the range(s) of use of certain spatial demonstrative terms, and help assess the roles played by gesture, access, attention, and addressee knowledge in demonstrative use. The stimuli consist of 25 diagrammed “elicitation settings” to be created by the researcher.
  • Willems, R. M., Van der Haegen, L., Fisher, S. E., & Francks, C. (2014). On the other hand: Including left-handers in cognitive neuroscience and neurogenetics. Nature Reviews Neuroscience, 15, 193-201. doi:10.1038/nrn3679.

    Abstract

    Left-handers are often excluded from study cohorts in neuroscience and neurogenetics in order to reduce variance in the data. However, recent investigations have shown that the inclusion or targeted recruitment of left-handers can be informative in studies on a range of topics, such as cerebral lateralization and the genetic underpinning of asymmetrical brain development. Left-handed individuals represent a substantial portion of the human population and therefore left-handedness falls within the normal range of human diversity; thus, it is important to account for this variation in our understanding of brain functioning. We call for neuroscientists and neurogeneticists to recognize the potential of studying this often-discarded group of research subjects.
  • Willems, R. M., & Francks, C. (2014). Your left-handed brain. Frontiers for Young Minds, 2: 13. doi:10.3389/frym.2014.00013.

    Abstract

    While most people prefer to use their right hand to brush their teeth, throw a ball, or hold a tennis racket, left-handers prefer to use their left hand. This is the case for around 10 per cent of all people. There was a time (not so long ago) when left-handers were stigmatized in Western (and other) communities: it was considered a bad sign if you were left-handed, and left-handed children were often forced to write with their right hand. This is nonsensical: there is nothing wrong with being left-handed, and trying to write with the non-preferred hand is frustrating for almost everybody. As a matter of fact, science can learn from left-handers, and in this paper, we discuss how this may be the case. We review why some people are left-handed and others are not, how left-handers' brains differ from right-handers', and why scientists study left-handedness in the first place.
  • Wilson, J. J., & Little, H. (2014). Emerging languages in esoteric and exoteric niches: Evidence from rural sign languages. In Ways to Protolanguage 3 book of abstracts (pp. 54-55).
  • Windhouwer, M., Petro, J., & Shayan, S. (2014). RELISH LMF: Unlocking the full power of the lexical markup framework. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 1032-1037).
  • Wirthlin, M., Chang, E. F., Knörnschild, M., Krubitzer, L. A., Mello, C. V., Miller, C. T., Pfenning, A. R., Vernes, S. C., Tchernichovski, O., & Yartsev, M. M. (2019). A modular approach to vocal learning: Disentangling the diversity of a complex behavioral trait. Neuron, 104(1), 87-99. doi:10.1016/j.neuron.2019.09.036.

    Abstract

    Vocal learning is a behavioral trait in which the social and acoustic environment shapes the vocal repertoire of individuals. Over the past century, the study of vocal learning has progressed at the intersection of ecology, physiology, neuroscience, molecular biology, genomics, and evolution. Yet, despite the complexity of this trait, vocal learning is frequently described as a binary trait, with species being classified as either vocal learners or vocal non-learners. As a result, studies have largely focused on a handful of species for which strong evidence for vocal learning exists. Recent studies, however, suggest a continuum in vocal learning capacity across taxa. Here, we further suggest that vocal learning is a multi-component behavioral phenotype composed of distinct yet interconnected modules. Discretizing the vocal learning phenotype into its constituent modules would facilitate integration of findings across a wider diversity of species, taking advantage of the ways in which each excels in a particular module, or in a specific combination of features. Such comparative studies can improve understanding of the mechanisms and evolutionary origins of vocal learning. We propose an initial set of vocal learning modules supported by behavioral and neurobiological data and highlight the need for diversifying the field in order to disentangle the complexity of the vocal learning phenotype.

  • Wittek, A. (1999). Zustandsveränderungsverben im Deutschen - wie lernt das Kind die komplexe Semantik? In J. Meibauer, & M. Rothweiler (Eds.), Das Lexikon im Spracherwerb (pp. 278-296). Tübingen: Francke.

    Abstract

    Angelika Wittek investigated change-of-state verbs in four- to six-year-old children. Up to the age of 8, English-speaking children understand these verbs as verbs of motion and ignore the fact that they additionally encode information about an end state, in the sense of the negation of the initial state. Wittek showed that, contrary to expectation, transparent, morphologically complex forms (wachmachen 'make awake'), in which the particle makes the end state explicit, are not understood better than simplex forms (wecken 'wake'). She also discussed the extent to which the use of the adverb wieder 'again' in its restitutive reading can provide evidence about the acquisition of these verbs.
  • Witteman, M. J., Weber, A., & McQueen, J. M. (2014). Tolerance for inconsistency in foreign-accented speech. Psychonomic Bulletin & Review, 21, 512-519. doi:10.3758/s13423-013-0519-8.

    Abstract

    Are listeners able to adapt to a foreign-accented speaker who has, as is often the case, an inconsistent accent? Two groups of native Dutch listeners participated in a cross-modal priming experiment, either in a consistent-accent condition (German-accented items only) or in an inconsistent-accent condition (German-accented and nativelike pronunciations intermixed). The experimental words were identical for both groups (words with vowel substitutions characteristic of German-accented speech); additional contextual words differed in accentedness (German-accented or nativelike words). All items were spoken by the same speaker: a German native who could produce the accented forms but could also pass for a Dutch native speaker. Listeners in the consistent-accent group were able to adapt quickly to the speaker (i.e., showed facilitatory priming for words with vocalic substitutions). Listeners in the inconsistent-accent condition showed adaptation to words with vocalic substitutions only in the second half of the experiment. These results indicate that adaptation to foreign-accented speech is rapid. Accent inconsistency slows listeners down initially, but a short period of additional exposure is enough for them to adapt to the speaker. Listeners can therefore tolerate inconsistency in foreign-accented speech.
  • Wittenburg, P., Skiba, R., & Trilsbeek, P. (2004). Technology and Tools for Language Documentation. Language Archive Newsletter, 1(4), 3-4.
  • Wittenburg, P. (2004). The IMDI metadata concept. In S. F. Ferreira (Ed.), Working material on Building the LR&E Roadmap: Joint COCOSDA and ICCWLRE Meeting (LREC2004). Paris: ELRA - European Language Resources Association.
  • Wittenburg, P. (2004). Training Course in Lithuania. Language Archive Newsletter, 1(2), 6-6.
  • Wittenburg, P., Brugman, H., Broeder, D., & Russel, A. (2004). XML-based language archiving. In Workshop Proceedings on XML-based Richly Annotated Corpora (LREC2004) (pp. 63-69). Paris: ELRA - European Language Resources Association.
  • Wittenburg, P., Trilsbeek, P., & Wittenburg, F. (2014). Corpus archiving and dissemination. In J. Durand, U. Gut, & G. Kristoffersen (Eds.), The Oxford Handbook of Corpus Phonology (pp. 133-149). Oxford: Oxford University Press.
  • Wittenburg, P., Gulrajani, G., Broeder, D., & Uneson, M. (2004). Cross-disciplinary integration of metadata descriptions. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC2004) (pp. 113-116). Paris: ELRA - European Language Resources Association.
  • Wittenburg, P., Dirksmeyer, R., Brugman, H., & Klaas, G. (2004). Digital formats for images, audio and video. Language Archive Newsletter, 1(1), 3-6.
  • Wittenburg, P., Johnson, H., Buchhorn, M., Brugman, H., & Broeder, D. (2004). Architecture for distributed language resource management and archiving. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC2004) (pp. 361-364). Paris: ELRA - European Language Resources Association.
  • Wittenburg, P. (2004). International Expert Meeting on Access Management for Distributed Language Archives. Language Archive Newsletter, 1(3), 12-12.
  • Wittenburg, P. (2004). Final review of INTERA. Language Archive Newsletter, 1(4), 11-12.
  • Wittenburg, P. (2004). LinguaPax Forum on Language Diversity, Sustainability, and Peace. Language Archive Newsletter, 1(3), 13-13.
  • Wittenburg, P. (2004). LREC conference 2004. Language Archive Newsletter, 1(3), 12-13.
  • Wittenburg, P. (2004). News from the Archive of the Max Planck Institute for Psycholinguistics. Language Archive Newsletter, 1(4), 12-12.
  • Wnuk, E., & Burenhult, N. (2014). Contact and isolation in hunter-gatherer language dynamics: Evidence from Maniq phonology (Aslian, Malay Peninsula). Studies in Language, 38(4), 956-981. doi:10.1075/sl.38.4.06wnu.
  • Wnuk, E., & Majid, A. (2014). Revisiting the limits of language: The odor lexicon of Maniq. Cognition, 131, 125-138. doi:10.1016/j.cognition.2013.12.008.

    Abstract

    It is widely believed that human languages cannot encode odors. While this is true for English and other related languages, data from some non-Western languages challenge this view. Maniq, a language spoken by a small population of nomadic hunter–gatherers in southern Thailand, is such a language. It has a lexicon of over a dozen terms dedicated to smell. We examined the semantics of these smell terms in 3 experiments (exemplar listing, similarity judgment and off-line rating). The exemplar listing task confirmed that Maniq smell terms have complex meanings encoding smell qualities. Analyses of the similarity data revealed that the odor lexicon is coherently structured by two dimensions. The underlying dimensions are pleasantness and dangerousness, as verified by the off-line rating study. Ethnographic data illustrate that smell terms have detailed semantics tapping into broader cultural constructs. Contrary to the widespread view that languages cannot encode odors, the Maniq data show odor can be a coherent semantic domain, thus shedding new light on the limits of language.
  • Wolf, M. C., Smith, A. C., Meyer, A. S., & Rowland, C. F. (2019). Modality effects in vocabulary acquisition. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 1212-1218). Montreal, QC: Cognitive Science Society.

    Abstract

    It is unknown whether modality affects the efficiency with which humans learn novel word forms and their meanings, with previous studies reporting both written and auditory advantages. The current study implements controls whose absence in previous work likely explains such contradictory findings. In two novel word learning experiments, participants were trained and tested on pseudoword–novel object pairs, with controls on modality of test, modality of meaning, duration of exposure, and transparency of word form. In both experiments word forms were presented in either their written or spoken form, each paired with a pictorial meaning (novel object). Following a 20-minute filler task, participants were tested on their ability to identify the picture-word form pairs on which they were trained. A between-subjects design generated four participant groups per experiment: 1) written training, written test; 2) written training, spoken test; 3) spoken training, written test; 4) spoken training, spoken test. In Experiment 1 the written stimulus was presented for a time period equal to the duration of the spoken form. Results showed that when the duration of exposure was equal, participants displayed a written training benefit. Given that words can be read faster than the time taken for the spoken form to unfold, in Experiment 2 the written form was presented for 300 ms, sufficient time to read the word yet 65% shorter than the duration of the spoken form. No modality effect was observed under these conditions, when exposure to the word form was equivalent. These results demonstrate, at least for proficient readers, that when exposure to the word form is controlled across modalities, the efficiency with which word form-meaning associations are learnt does not differ. Our results therefore suggest that, although we typically begin as aural-only word learners, we ultimately converge on learning mechanisms that learn equally efficiently from both written and spoken materials.
  • Wolf, M. C., Muijselaar, M. M. L., Boonstra, A. M., & De Bree, E. H. (2019). The relationship between reading and listening comprehension: Shared and modality-specific components. Reading and Writing, 32(7), 1747-1767. doi:10.1007/s11145-018-9924-8.

    Abstract

    This study aimed to increase our understanding of the relationship between reading and listening comprehension. Both in comprehension theory and in educational practice, reading and listening comprehension are often seen as interchangeable, overlooking the modality-specific aspects of each. Three questions were addressed. First, it was examined to what extent reading and listening comprehension comprise modality-specific, distinct skills or an overlapping, domain-general skill, in terms of the amount of variance in one comprehension type explained by the opposite comprehension type. Second, general and modality-unique subskills of reading and listening comprehension were sought by assessing the contributions of the foundational skills word reading fluency, vocabulary, memory, attention, and inhibition to both comprehension types. Lastly, the practice of using either listening comprehension or vocabulary as a proxy of general comprehension was investigated. Reading and listening comprehension tasks with the same format were administered to 85 second- and third-grade children. Analyses revealed that reading comprehension explained 34% of the variance in listening comprehension, and listening comprehension 40% of the variance in reading comprehension. Vocabulary and word reading fluency were found to be shared contributors to both reading and listening comprehension. None of the other cognitive skills contributed significantly to reading or listening comprehension. These results indicate that only part of the comprehension process is indeed domain-general and not influenced by the modality in which the information is provided. Vocabulary in particular seems to play a large role in this domain-general part. The findings warrant a more prominent focus on modality-specific aspects of both reading and listening comprehension in research and education.
  • Wright, S. E., Windhouwer, M., Schuurman, I., & Broeder, D. (2014). Segueing from a Data Category Registry to a Data Concept Registry. In Proceedings of the 11th International Conference on Terminology and Knowledge Engineering (TKE 2014).

    Abstract

    The terminology Community of Practice has long standardized data categories in the framework of ISO TC 37. ISO 12620:2009 specifies the data model and procedures for a Data Category Registry (DCR), which has been implemented by the Max Planck Institute for Psycholinguistics as the ISOcat DCR. The DCR has been used not only by ISO TC 37 but also by the CLARIN research infrastructure. This paper describes how the needs of these communities have started to diverge, and the process of segueing from a DCR to a Data Concept Registry in order to meet the needs of both communities.
  • Yang, A., & Chen, A. (2014). Prosodic focus marking in child and adult Mandarin Chinese. In C. Gussenhoven, Y. Chen, & D. Dediu (Eds.), Proceedings of the 4th International Symposium on Tonal Aspects of Language (pp. 54-58).

    Abstract

    This study investigates how Mandarin Chinese speaking children and adults use prosody to mark focus in spontaneous speech. SVO sentences were elicited from 4- and 8-year-olds and adults in a game setting. Sentence-medial verbs were acoustically analysed for both duration and pitch range in different focus conditions. We have found that, like the adults, the 8-year-olds used both duration and pitch range to distinguish focus from non-focus. The 4-year-olds used only duration to distinguish focus from non-focus, unlike the adults and 8-year-olds. None of the three groups of speakers distinguished contrastive focus from non-contrastive focus using pitch range or duration. Regarding the distinction between narrow focus and broad focus, the 4- and 8-year-olds used both pitch range and duration for this purpose, while the adults used only duration.
  • Yang, A., & Chen, A. (2014). Prosodic focus-marking in Chinese four- and eight-year-olds. In N. Campbell, D. Gibbon, & D. Hirst (Eds.), Proceedings of Speech Prosody 2014 (pp. 713-717).

    Abstract

    This study investigates how Mandarin Chinese speaking children use prosody to distinguish focus from non-focus, and focus types differing in size of constituent and contrastivity. SVO sentences were elicited from four- and eight-year-olds in a game setting. Sentence-medial verbs were acoustically analysed for both duration and pitch range in different focus conditions. The children started to use duration to differentiate focus from non-focus at the age of four. But their use of pitch range varied with age and depended on non-focus conditions (pre- vs. postfocus) and the lexical tones of the verbs. Further, the children in both age groups used pitch range but not duration to differentiate narrow focus from broad focus, and they did not differentiate contrastive narrow focus from non-contrastive narrow focus using duration or pitch range. The results indicated that Chinese children acquire the prosodic means (duration and pitch range) of marking focus in stages, and that their acquisition of these two means appears to be early compared to that of children speaking an intonation language such as Dutch.
  • Yang, Y., Dai, B., Howell, P., Wang, X., Li, K., & Lu, C. (2014). White and grey matter changes in the language network during healthy aging. PLoS One, 9(9): e108077. doi:10.1371/journal.pone.0108077.

    Abstract

    Neural structures change with age but there is no consensus on the exact processes involved. This study tested the hypothesis that white and grey matter in the language network changes during aging according to a “last in, first out” process. The fractional anisotropy (FA) of white matter and cortical thickness of grey matter were measured in 36 participants whose ages ranged from 55 to 79 years. Within the language network, the dorsal pathway connecting the mid-to-posterior superior temporal cortex (STC) and the inferior frontal cortex (IFC) was affected more by aging in both FA and thickness than the other dorsal pathway connecting the STC with the premotor cortex and the ventral pathway connecting the mid-to-anterior STC with the ventral IFC. These results were independently validated in a second group of 20 participants whose ages ranged from 50 to 73 years. The pathway that is most affected during aging matures later than the other two pathways (which are present at birth). The results are interpreted as showing that the neural structures which mature later are affected more than those that mature earlier, supporting the “last in, first out” theory.
  • Zampieri, M., & Gebre, B. G. (2014). VarClass: An open-source language identification tool for language varieties. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 3305-3308).

    Abstract

    This paper presents VarClass, an open-source tool for language identification, available both for download and through a user-friendly graphical interface. The main difference between VarClass and other state-of-the-art language identification tools is its focus on language varieties. General purpose language identification tools do not take language varieties into account, and our work aims to fill this gap. VarClass currently contains language models for over 27 languages, 10 of which are language varieties. We report an average performance of over 90.5% accuracy on a challenging dataset. More language models will be included in the upcoming months.
  • Zavala, R. (1997). Functional analysis of Akatek voice constructions. International Journal of American Linguistics, 63(4), 439-474.

    Abstract

    The author examines the correlations between syntactic structure and pragmatic function in the voice alternations of Akatek, a Mayan language belonging to the Q'anjob'alan subgroup. Pragmatic voice alternations are the mechanisms by which languages encode the differing degrees of topicality of the two main participants of a semantically transitive event, the agent and the patient. Using a quantitative analysis, the author assesses the topicality of these participants and identifies the syntactic structures that express the four main voice functions of Akatek: active-direct, inverse, passive, and antipassive.
  • Zavala, R. M. (1999). External possessor in Oluta Popoluca (Mixean): Applicatives and incorporation of relational terms. In D. L. Payne, & I. Barshi (Eds.), External possession (pp. 339-372). Amsterdam: Benjamins.
  • Zeshan, U. (2004). Basic English course taught in Indian Sign Language (Ali Yavar Jung National Institute for the Hearing Handicapped, Ed.). Mumbai: National Institute for the Hearing Handicapped.
  • Zeshan, U. (2004). Interrogative constructions in sign languages - Cross-linguistic perspectives. Language, 80(1), 7-39.

    Abstract

    This article reports on results from a broad crosslinguistic study based on data from thirty-five signed languages around the world. The study is the first of its kind, and the typological generalizations presented here cover the domain of interrogative structures as they appear across a wide range of geographically and genetically distinct signed languages. Manual and nonmanual ways of marking basic types of questions in signed languages are investigated. As a result, it becomes clear that the range of crosslinguistic variation is extensive for some subparameters, such as the structure of question-word paradigms, while other parameters, such as the use of nonmanual expressions in questions, show more similarities across signed languages. Finally, it is instructive to compare the findings from signed language typology to relevant data from spoken languages at a more abstract, crossmodality level.
  • Zeshan, U. (2004). Hand, head and face - negative constructions in sign languages. Linguistic Typology, 8(1), 1-58. doi:10.1515/lity.2004.003.

    Abstract

    This article presents a typology of negative constructions across a substantial number of sign languages from around the globe. After situating the topic within the wider context of linguistic typology, the main negation strategies found across sign languages are described. Nonmanual negation includes the use of head movements and facial expressions for negation and is of great importance in sign languages as well as particularly interesting from a typological point of view. As far as manual signs are concerned, independent negative particles represent the dominant strategy, but there are also instances of irregular negation in most sign languages. Irregular negatives may take the form of suppletion, cliticisation, affixing, or internal modification of a sign. The results of the study lead to interesting generalisations about similarities and differences between negatives in signed and spoken languages.
  • Zhang, Y., Chen, C.-h., & Yu, C. (2019). Mechanisms of cross-situational learning: Behavioral and computational evidence. In Advances in Child Development and Behavior; vol. 56 (pp. 37-63).

    Abstract

    Word learning happens in everyday contexts with many words and many potential referents for those words in view at the same time. It is challenging for young learners to find the correct referent upon hearing an unknown word in the moment. This problem of referential uncertainty has been deemed the crux of early word learning (Quine, 1960). Recent empirical and computational studies have found support for a statistical solution to the problem termed cross-situational learning. Cross-situational learning allows learners to acquire word meanings across multiple exposures, even though each individual exposure is referentially uncertain. Recent empirical research shows that infants, children, and adults rely on cross-situational learning to learn new words (Smith & Yu, 2008; Suanda, Mugwanya, & Namy, 2014; Yu & Smith, 2007). However, researchers have found evidence supporting two very different theoretical accounts of the learning mechanism: Hypothesis Testing (Gleitman, Cassidy, Nappa, Papafragou, & Trueswell, 2005; Markman, 1992) and Associative Learning (Frank, Goodman, & Tenenbaum, 2009; Yu & Smith, 2007). Hypothesis Testing is generally characterized as a form of learning in which a coherent hypothesis regarding a specific word-object mapping is formed, often in conceptually constrained ways; the hypothesis is then either accepted or rejected in light of additional evidence. Proponents of the Associative Learning framework, by contrast, characterize learning as the aggregation of information over time through implicit associative mechanisms: a learner acquires the meaning of a word when the association between the word and its referent becomes relatively strong. In this chapter, we consider these two psychological theories in the context of cross-situational word-referent learning. By reviewing recent empirical and cognitive modeling studies, our goal is to deepen our understanding of the underlying word learning mechanisms by examining and comparing the two theoretical learning accounts.
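    The associative account summarized in this abstract can be illustrated with a minimal sketch: a learner tallies word-referent co-occurrences across ambiguous situations, and a mapping counts as learned once one association dominates. This is a hypothetical illustration under simplifying assumptions (raw co-occurrence counts, no forgetting or competition), not code from the studies cited; the function and variable names are invented for the example.

    ```python
    from collections import defaultdict

    def learn(situations):
        """Accumulate word-referent co-occurrence counts across situations.

        Each situation pairs the words heard with the referents in view.
        Within a single situation any word may co-occur with any referent,
        so each individual exposure is referentially ambiguous.
        """
        counts = defaultdict(lambda: defaultdict(int))
        for words, referents in situations:
            for word in words:
                for referent in referents:
                    counts[word][referent] += 1
        return counts

    def best_referent(counts, word):
        """Return the referent most strongly associated with a word."""
        return max(counts[word], key=counts[word].get)

    # Three ambiguous exposures: 'ball' always co-occurs with BALL,
    # while the distractor words and referents vary across situations.
    situations = [
        (["ball", "dog"], ["BALL", "DOG"]),
        (["ball", "cup"], ["BALL", "CUP"]),
        (["ball", "shoe"], ["BALL", "SHOE"]),
    ]
    counts = learn(situations)
    print(best_referent(counts, "ball"))  # BALL
    ```

    Under the contrasting Hypothesis Testing account, the learner would instead carry forward a single conjectured mapping (e.g., "ball" means BALL) and keep or discard it when the next situation confirms or contradicts it, rather than aggregating graded counts over all co-occurrences.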
  • Zheng, X., & Lemhöfer, K. (2019). The “semantic P600” in second language processing: When syntax conflicts with semantics. Neuropsychologia, 127, 131-147. doi:10.1016/j.neuropsychologia.2019.02.010.

    Abstract

    In sentences like “the mouse that chased the cat was hungry”, the syntactically correct interpretation (the mouse chases the cat) is contradicted by semantic and pragmatic knowledge. Previous research has shown that L1 speakers sometimes base sentence interpretation on this type of knowledge (so-called “shallow” or “good-enough” processing). We made use of both behavioural and ERP measurements to investigate whether L2 learners differ from native speakers in the extent to which they engage in “shallow” syntactic processing. German learners of Dutch as well as Dutch native speakers read sentences containing relative clauses (as in the example above) for which the plausible thematic roles were or were not reversed, and made plausibility judgments. The results show that behaviourally, L2 learners had more difficulty than native speakers in discriminating plausible from implausible sentences. In the ERPs, we replicated the previously reported finding of a “semantic P600” for semantic reversal anomalies in native speakers, probably reflecting the effort to resolve the syntax-semantics conflict. In L2 learners, though, this P600 was largely attenuated and surfaced only in those trials that were judged correctly for plausibility. These results generally point to a more prevalent, though not exclusive, occurrence of shallow syntactic processing in L2 learners.
  • Zhou, W., & Broersma, M. (2014). Perception of birth language tone contrasts by adopted Chinese children. In C. Gussenhoven, Y. Chen, & D. Dediu (Eds.), Proceedings of the 4th International Symposium on Tonal Aspects of Language (pp. 63-66).

    Abstract

    The present study investigates how long after adoption adoptees forget the phonology of their birth language. Chinese children who were adopted by Dutch families were tested on the perception of birth language tone contrasts before, during, and after perceptual training. Experiment 1 investigated Cantonese tone 2 (High-Rising) and tone 5 (Low-Rising), and Experiment 2 investigated Mandarin tone 2 (High-Rising) and tone 3 (Low-Dipping). In both experiments, participants were adoptees and non-adopted Dutch controls. Results of both experiments show that the tone contrasts were very difficult for the adoptees to perceive, and that adoptees were not better at perceiving the tone contrasts than their non-adopted Dutch peers, before or after training. This demonstrates that forgetting took place relatively soon after adoption, and that the re-exposure the adoptees received did not lead to an improvement greater than that of the Dutch control participants. Thus, the findings confirm what has been anecdotally reported by adoptees and their parents, but had not been empirically tested before, namely that birth language forgetting occurs very soon after adoption.
  • Zhu, Z., Bastiaansen, M. C. M., Hakun, J. G., Petersson, K. M., Wang, S., & Hagoort, P. (2019). Semantic unification modulates N400 and BOLD signal change in the brain: A simultaneous EEG-fMRI study. Journal of Neurolinguistics, 52: 100855. doi:10.1016/j.jneuroling.2019.100855.

    Abstract

    Semantic unification during sentence comprehension has been associated with amplitude change of the N400 in event-related potential (ERP) studies, and with activation in the left inferior frontal gyrus (IFG) in functional magnetic resonance imaging (fMRI) studies. However, the specificity of this activation to semantic unification remains unknown. To more closely examine the brain processes involved in semantic unification, we employed simultaneous EEG-fMRI to time-lock the semantic-unification-related N400 change, and integrated trial-by-trial variation in both N400 and BOLD change beyond the condition-level BOLD differences measured in traditional fMRI analyses. Participants read sentences in which semantic unification load was parametrically manipulated by varying cloze probability. Analysed separately, the ERP and fMRI results replicated previous findings, in that semantic unification load parametrically modulated N400 amplitude and cortical activation. Integrated EEG-fMRI analyses revealed a different pattern, in which functional activity in the left IFG and bilateral supramarginal gyrus (SMG) was associated with N400 amplitude, with the left IFG activation and bilateral SMG activation being selective to the condition-level and trial-level semantic unification load, respectively. By employing integrated EEG-fMRI analyses, this study is among the first to shed light on how to integrate trial-level variation in language comprehension.
  • Zora, H., Riad, T., & Ylinen, S. (2019). Prosodically controlled derivations in the mental lexicon. Journal of Neurolinguistics, 52: 100856. doi:10.1016/j.jneuroling.2019.100856.

    Abstract

    Swedish morphemes are classified as prosodically specified or prosodically unspecified, depending on lexical or phonological stress, respectively. Here, we investigate the allomorphy of the suffix -(i)sk, which indicates the distinction between lexical and phonological stress; if attached to a lexically stressed morpheme, it takes a non-syllabic form (-sk), whereas if attached to a phonologically stressed morpheme, an epenthetic vowel is inserted (-isk). Using mismatch negativity (MMN), we explored the neural processing of this allomorphy across lexically stressed and phonologically stressed morphemes. In an oddball paradigm, participants were occasionally presented with congruent and incongruent derivations, created by the suffix -(i)sk, within the repetitive presentation of their monomorphemic stems. The results indicated that the congruent derivation of the lexically stressed stem elicited a larger MMN than the incongruent sequences of the same stem and the derivational suffix, whereas after the phonologically stressed stem a non-significant tendency towards an opposite pattern was observed. We argue that the significant MMN response to the congruent derivation in the lexical stress condition is in line with lexical MMN, indicating a holistic processing of the sequence of lexically stressed stem and derivational suffix. The enhanced MMN response to the incongruent derivation in the phonological stress condition, on the other hand, is suggested to reflect combinatorial processing of the sequence of phonologically stressed stem and derivational suffix. These findings bring a new aspect to the dual-system approach to neural processing of morphologically complex words, namely the specification of word stress.
  • Zormpa, E., Meyer, A. S., & Brehm, L. (2019). Slow naming of pictures facilitates memory for their names. Psychonomic Bulletin & Review, 26(5), 1675-1682. doi:10.3758/s13423-019-01620-x.

    Abstract

    Speakers remember their own utterances better than those of their interlocutors, suggesting that language production is beneficial to memory. This may be partly explained by a generation effect: The act of generating a word is known to lead to a memory advantage (Slamecka & Graf, 1978). In earlier work, we showed a generation effect for recognition of images (Zormpa, Brehm, Hoedemaker, & Meyer, 2019). Here, we tested whether the recognition of their names would also benefit from name generation. Testing whether picture naming improves memory for words was our primary aim, as it serves to clarify whether the representations affected by generation are visual or conceptual/lexical. A secondary aim was to assess the influence of processing time on memory. Fifty-one participants named pictures in three conditions: after hearing the picture name (identity condition), backward speech, or an unrelated word. A day later, recognition memory was tested in a yes/no task. Memory in the backward speech and unrelated conditions, which required generation, was superior to memory in the identity condition, which did not require generation. The time taken by participants for naming was a good predictor of memory, such that words that took longer to be retrieved were remembered better. Importantly, that was the case only when generation was required: In the no-generation (identity) condition, processing time was not related to recognition memory performance. This work has shown that generation affects conceptual/lexical representations, making an important contribution to the understanding of the relationship between memory and language.
  • Zormpa, E., Brehm, L., Hoedemaker, R. S., & Meyer, A. S. (2019). The production effect and the generation effect improve memory in picture naming. Memory, 27(3), 340-352. doi:10.1080/09658211.2018.1510966.

    Abstract

    The production effect (better memory for words read aloud than words read silently) and the picture superiority effect (better memory for pictures than words) both improve item memory in a picture naming task (Fawcett, J. M., Quinlan, C. K., & Taylor, T. L. (2012). Interplay of the production and picture superiority effects: A signal detection analysis. Memory (Hove, England), 20(7), 655–666. doi:10.1080/09658211.2012.693510). Because picture naming requires coming up with an appropriate label, the generation effect (better memory for generated than read words) may contribute to the latter effect. In two forced-choice memory experiments, we tested the role of generation in a picture naming task on later recognition memory. In Experiment 1, participants named pictures silently or aloud with the correct name or an unreadable label superimposed. We observed a generation effect, a production effect, and an interaction between the two. In Experiment 2, unreliable labels were included to ensure full picture processing in all conditions. In this experiment, we observed a production and a generation effect but no interaction, implying the effects are dissociable. This research demonstrates the separable roles of generation and production in picture naming and their impact on memory. As such, it informs the link between memory and language production and has implications for memory asymmetries between language production and comprehension.

    Additional information

    pmem_a_1510966_sm9257.pdf
  • De Zubicaray, G. I., Hartsuiker, R. J., & Acheson, D. J. (2014). Mind what you say—general and specific mechanisms for monitoring in speech production. Frontiers in Human Neuroscience, 8: 514. doi:10.3389/fnhum.2014.00514.

    Abstract

    For most people, speech production is relatively effortless and error-free. Yet it has long been recognized that we need some type of control over what we are currently saying and what we plan to say. Precisely how we monitor our internal and external speech has been a topic of research interest for several decades. The predominant approach in psycholinguistics has assumed monitoring of both is accomplished via systems responsible for comprehending others' speech.

    This special topic aimed to broaden the field, firstly by examining proposals that speech production might also engage more general systems, such as those involved in action monitoring. A second aim was to examine proposals for a production-specific, internal monitor. Both aims require that we also specify the nature of the representations subject to monitoring.
  • Zuidema, W., & Fitz, H. (2019). Key issues and future directions: Models of human language and speech processing. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 353-358). Cambridge, MA: MIT Press.
  • Zumer, J. M., Scheeringa, R., Schoffelen, J.-M., Norris, D. G., & Jensen, O. (2014). Occipital alpha activity during stimulus processing gates the information flow to object-selective cortex. PLoS Biology, 12(10): e1001965. doi:10.1371/journal.pbio.1001965.

    Abstract

    Given the limited processing capabilities of the sensory system, it is essential that attended information is gated to downstream areas, whereas unattended information is blocked. While it has been proposed that alpha band (8–13 Hz) activity serves to route information to downstream regions by inhibiting neuronal processing in task-irrelevant regions, this hypothesis remains untested. Here we investigate how neuronal oscillations detected by electroencephalography in visual areas during working memory encoding serve to gate information reflected in the simultaneously recorded blood-oxygenation-level-dependent (BOLD) signals recorded by functional magnetic resonance imaging in downstream ventral regions. We used a paradigm in which 16 participants were presented with faces and landscapes in the right and left hemifields; one hemifield was attended and the other unattended. We observed that decreased alpha power contralateral to the attended object predicted the BOLD signal representing the attended object in ventral object-selective regions. Furthermore, increased alpha power ipsilateral to the attended object predicted a decrease in the BOLD signal representing the unattended object. We also found that the BOLD signal in the dorsal attention network inversely correlated with visual alpha power. This is the first demonstration, to our knowledge, that oscillations in the alpha band are implicated in the gating of information from the visual cortex to the ventral stream, as reflected in the representationally specific BOLD signal. This link of sensory alpha to downstream activity provides a neurophysiological substrate for the mechanism of selective attention during stimulus processing, which not only boosts the attended information but also suppresses distraction. Although previous studies have shown a relation between the BOLD signal from the dorsal attention network and the alpha band at rest, we demonstrate such a relation during a visuospatial task, indicating that the dorsal attention network exercises top-down control of visual alpha activity.
  • Zwitserlood, I. (2014). Meaning at the feature level in sign languages. The case of name signs in Sign Language of the Netherlands (NGT). In R. Kager (Ed.), Where the Principles Fail. A Festschrift for Wim Zonneveld on the occasion of his 64th birthday (pp. 241-251). Utrecht: Utrecht Institute of Linguistics OTS.
