Publications

  • Warner, N., Smits, R., McQueen, J. M., & Cutler, A. (2005). Phonological and statistical effects on timing of speech perception: Insights from a database of Dutch diphone perception. Speech Communication, 46(1), 53-72. doi:10.1016/j.specom.2005.01.003.

    Abstract

    We report detailed analyses of a very large database on timing of speech perception collected by Smits et al. (Smits, R., Warner, N., McQueen, J.M., Cutler, A., 2003. Unfolding of phonetic information over time: A database of Dutch diphone perception. J. Acoust. Soc. Am. 113, 563–574). Eighteen listeners heard all possible diphones of Dutch, gated in portions of varying size and presented without background noise. The present report analyzes listeners’ responses across gates in terms of phonological features (voicing, place, and manner for consonants; height, backness, and length for vowels). The resulting patterns for feature perception differ from patterns reported when speech is presented in noise. The data are also analyzed for effects of stress and of phonological context (neighboring vowel vs. consonant); effects of these factors are observed to be surprisingly limited. Finally, statistical effects, such as overall phoneme frequency and transitional probabilities, along with response biases, are examined; these too exercise only limited effects on response patterns. The results suggest highly accurate speech perception on the basis of acoustic information alone.
  • Warner, N., Kim, J., Davis, C., & Cutler, A. (2005). Use of complex phonological patterns in speech processing: Evidence from Korean. Journal of Linguistics, 41(2), 353-387. doi:10.1017/S0022226705003294.

    Abstract

    Korean has a very complex phonology, with many interacting alternations. In a coronal-/i/ sequence, depending on the type of phonological boundary present, alternations such as palatalization, nasal insertion, nasal assimilation, coda neutralization, and intervocalic voicing can apply. This paper investigates how the phonological patterns of Korean affect processing of morphemes and words. Past research on languages such as English, German, Dutch, and Finnish has shown that listeners exploit syllable structure constraints in processing speech and segmenting it into words. The current study shows that in parsing speech, listeners also use much more complex patterns that relate the surface phonological string to various boundaries.
  • Wassenaar, M., & Hagoort, P. (2005). Word-category violations in patients with Broca's aphasia: An ERP study. Brain and Language, 92, 117-137. doi:10.1016/j.bandl.2004.05.011.

    Abstract

    An event-related brain potential experiment was carried out to investigate on-line syntactic processing in patients with Broca’s aphasia. Subjects were visually presented with sentences that were either syntactically correct or contained word-category violations. Three groups of subjects were tested: Broca patients (N=11), non-aphasic patients with a right hemisphere (RH) lesion (N=9), and healthy age-matched controls (N=15). Both control groups appeared sensitive to the word-category violations, as shown by clear P600/SPS effects. The Broca patients displayed only a strongly reduced and delayed P600/SPS effect. The results are discussed in the context of a lexicalist parsing model. It is concluded that Broca patients are hindered in detecting on-line word-category violations when word-class information is incomplete or its availability is delayed.
  • Wassenaar, M. (2005). Agrammatic comprehension: An electrophysiological approach. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.60340.

    Abstract

    This dissertation examines syntactic comprehension problems in patients with Broca's aphasia from an electrophysiological perspective. Its central objective was to further explore what syntax-related event-related brain potential (ERP) effects can reveal about the nature of the deficit that underlies syntactic comprehension problems in patients with Broca's aphasia. Chapters two to four describe experiments in which event-related brain potentials were recorded while subjects (Broca patients, non-aphasic patients with a right-hemisphere lesion, and healthy elderly controls) were presented with sentences that either contained violations of syntactic constraints or were syntactically correct. Chapter two investigates ERP effects of subject-verb agreement violations in the different subject groups and seeks to answer the following questions: Do agrammatic comprehenders show sensitivity to subject-verb agreement violations, as indicated by a syntax-related ERP effect? And does the severity of the syntactic comprehension impairment in the Broca patients affect the ERP responses? Chapter three investigates whether Broca patients show sensitivity to violations of word order, as indicated by a syntax-related ERP effect, and whether the ERP responses in the Broca patients are affected by the severity of their syntactic comprehension impairment. Chapter four reports on ERP effects of word-category violations; in addition, a semantic violation condition was included to track possible dissociations in the Broca patients' sensitivity to semantic and syntactic information. Chapter five describes the development of a paradigm in which the electrophysiological approach and the classical sentence-picture matching approach are combined; here, the ERP method is applied to study on-line thematic role assignment in Broca patients during sentence-picture matching, and the relation between ERP effects and behavioral responses is also examined. Finally, chapter six provides a summary of the main findings of the experiments and a general discussion.

    Additional information

    full text via Radboud Repository
  • Weber, A. (1998). Listening to nonnative language which violates native assimilation rules. In D. Duez (Ed.), Proceedings of the European Scientific Communication Association workshop: Sound patterns of Spontaneous Speech (pp. 101-104).

    Abstract

    Recent studies using phoneme detection tasks have shown that spoken-language processing is neither facilitated nor interfered with by optional assimilation, but is inhibited by violation of obligatory assimilation. Interpretation of these results depends on an assessment of their generality, specifically, whether they also obtain when listeners are processing nonnative language. Two separate experiments are presented in which native listeners of German and native listeners of Dutch had to detect a target fricative in legal monosyllabic Dutch nonwords. All of the nonwords were correct realisations in standard Dutch. For German listeners, however, half of the nonwords contained phoneme strings which violate the German fricative assimilation rule. Whereas the Dutch listeners showed no significant effects, German listeners detected the target fricative faster when the German fricative assimilation was violated than when no violation occurred. The results might suggest that violation of assimilation rules does not have to make processing more difficult per se.
  • Weber, A. (2000). Phonotactic and acoustic cues for word segmentation in English. In Proceedings of the 6th International Conference on Spoken Language Processing (ICSLP 2000) (pp. 782-785).

    Abstract

    This study investigates the influence of both phonotactic and acoustic cues on the segmentation of spoken English. Listeners detected embedded English words in nonsense sequences (word spotting). Words aligned with phonotactic boundaries were easier to detect than words without such alignment. Acoustic cues to boundaries could also have signaled word boundaries, especially when word onsets lacked phonotactic alignment. However, only one of several durational boundary cues showed a marginally significant correlation with response times (RTs). The results suggest that word segmentation in English is influenced primarily by phonotactic constraints and only secondarily by acoustic aspects of the speech signal.
  • Weber, A. (2000). The role of phonotactics in the segmentation of native and non-native continuous speech. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP, Workshop on Spoken Word Access Processes. Nijmegen: MPI for Psycholinguistics.

    Abstract

    Previous research has shown that listeners make use of their knowledge of phonotactic constraints to segment speech into individual words. The present study investigates the influence of phonotactics when segmenting a non-native language. German and English listeners detected embedded English words in nonsense sequences. German listeners also had knowledge of English, but English listeners had no knowledge of German. Word onsets were either aligned with a syllable boundary or not, according to the phonotactics of the two languages. Words aligned with either German or English phonotactic boundaries were easier for German listeners to detect than words without such alignment. Responses of English listeners were influenced primarily by English phonotactic alignment. The results suggest that both native and non-native phonotactic constraints influence lexical segmentation of a non-native, but familiar, language.
  • Wegener, C. (2005). Major word classes in Savosavo. Grazer Linguistische Studien, 64, 29-52.
  • Whitmarsh, S., Udden, J., Barendregt, H., & Petersson, K. M. (2013). Mindfulness reduces habitual responding based on implicit knowledge: Evidence from artificial grammar learning. Consciousness and Cognition, (3), 833-845. doi:10.1016/j.concog.2013.05.007.

    Abstract

    Participants were unknowingly exposed to complex regularities in a working memory task. The existence of implicit knowledge was subsequently inferred from a preference for stimuli with similar grammatical regularities. Several affective traits have been shown to influence AGL performance positively, many of which are related to a tendency for automatic responding. We therefore tested whether the mindfulness trait predicted a reduction of grammatically congruent preferences, and used emotional primes to explore the influence of affect. Mindfulness was shown to correlate negatively with grammatically congruent responses. Negative primes were shown to result in faster and more negative evaluations. We conclude that grammatically congruent preference ratings rely on habitual responses, and that our findings provide empirical evidence for the non-reactive disposition of the mindfulness trait.
  • Willems, R. M. (2013). Can literary studies contribute to cognitive neuroscience? Journal of literary semantics, 42(2), 217-222. doi:10.1515/jls-2013-0011.
  • Windhouwer, M., Petro, J., Newskaya, I., Drude, S., Aristar-Dry, H., & Gippert, J. (2013). Creating a serialization of LMF: The experience of the RELISH project. In G. Francopoulo (Ed.), LMF - Lexical Markup Framework (pp. 215-226). London: Wiley.
  • Windhouwer, M., & Wright, S. E. (2013). LMF and the Data Category Registry: Principles and application. In G. Francopoulo (Ed.), LMF: Lexical Markup Framework (pp. 41-50). London: Wiley.
  • Wittek, A. (1998). Learning verb meaning via adverbial modification: Change-of-state verbs in German and the adverb "wieder" ('again'). In A. Greenhill, M. Hughes, H. Littlefield, & H. Walsh (Eds.), Proceedings of the 22nd Annual Boston University Conference on Language Development (pp. 779-790). Somerville, MA: Cascadilla Press.
  • Witteman, M. J. (2013). Lexical processing of foreign-accented speech: Rapid and flexible adaptation. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Witteman, M. J., Weber, A., & McQueen, J. M. (2013). Foreign accent strength and listener familiarity with an accent co-determine speed of perceptual adaptation. Attention, Perception & Psychophysics, 75, 537-556. doi:10.3758/s13414-012-0404-y.

    Abstract

    We investigated how the strength of a foreign accent and varying types of experience with foreign-accented speech influence the recognition of accented words. In Experiment 1, native Dutch listeners with limited or extensive prior experience with German-accented Dutch completed a cross-modal priming experiment with strongly, medium, and weakly accented words. Participants with limited experience were primed by the medium and weakly accented words, but not by the strongly accented words. Participants with extensive experience were primed by all accent types. In Experiments 2 and 3, Dutch listeners with limited experience listened to a short story before doing the cross-modal priming task. In Experiment 2, the story was spoken by the priming task speaker and either contained strongly accented words or did not. Strongly accented exposure led to immediate priming by novel strongly accented words, while exposure to the speaker without strongly accented tokens led to priming only in the experiment’s second half. In Experiment 3, listeners listened to the story with strongly accented words spoken by a different German-accented speaker. Listeners were primed by the strongly accented words, but again only in the experiment’s second half. Together, these results show that adaptation to foreign-accented speech is rapid but depends on accent strength and on listener familiarity with those strongly accented words.
  • Wittenburg, P., Skiba, R., & Trilsbeek, P. (2005). The language archive at the MPI: Contents, tools, and technologies. Language Archives Newsletter, 5, 7-9.
  • Wittenburg, P., & Ringersma, J. (2013). Metadata description for lexicons. In R. H. Gouws, U. Heid, W. Schweickard, & H. E. Wiegand (Eds.), Dictionaries: An international encyclopedia of lexicography: Supplementary volume: Recent developments with focus on electronic and computational lexicography (pp. 1329-1335). Berlin: Mouton de Gruyter.
  • Wohlgemuth, J., & Dirksmeyer, T. (Eds.). (2005). Bedrohte Vielfalt. Aspekte des Sprach(en)tods – Aspects of language death. Berlin: Weißensee.

    Abstract

    About 5,000 languages are spoken in the world today. More than half of them have less than 10,000 speakers, a quarter of them even fewer than 1,000. The majority of these “small” languages will not live to see the end of this century; some estimates predict that no more than a dozen languages will still be spoken by the turn of the next millennium. This collection of papers approaches the subject of language extinction through five major topics: general aspects of language death, case studies, endangered subsystems, language protection and revitalization, language ecology. In 24 articles, the authors address the causes, manifestations, and consequences of language endangerment and extinction as well as the linguistic and social changes associated with it, drawing examples from a large number of languages.
  • Wright, S. E., Windhouwer, M., Schuurman, I., & Kemps-Snijders, M. (2013). Community efforts around the ISOcat Data Category Registry. In I. Gurevych, & J. Kim (Eds.), The People's Web meets NLP: Collaboratively constructed language resources (pp. 349-374). New York: Springer.

    Abstract

    The ISOcat Data Category Registry provides a community computing environment for creating, storing, retrieving, harmonizing and standardizing data category specifications (DCs), which are used to register linguistic terms from various fields. This chapter recounts the history of DC documentation in TC 37, beginning with paper-based lists created for lexicographers and terminologists and progressing to the development of a web-based resource for a much broader range of users. While describing the considerable strides made toward assembling a very large, comprehensive collection of DCs, it also outlines the difficulties that have arisen in developing a fully operative web-based computing environment for achieving consensus on data category names, definitions, and selections. Finally, it describes efforts to overcome some of the present shortcomings and to establish working procedures designed to engage the wide range of people involved in creating language resources.
  • Wright, S. E., & Windhouwer, M. (2013). ISOcat - im Reich der Datenkategorien. eDITion: Fachzeitschrift für Terminologie, 9(1), 8-12.

    Abstract

    The ISOcat Data Category Registry (www.isocat.org) of Technical Committee ISO/TC 37 (Terminology and other language and content resources) describes field names and values for language resources. Recommended field names and reliable definitions are intended to help ensure that language data can be reused independently of applications, platforms, and communities of practice (CoP). Data Category Selections can be viewed, printed, and exported, and, after free registration, new ones can be created.
  • Zavala, R. (2000). Multiple classifier systems in Akatek (Mayan). In G. Senft (Ed.), Systems of nominal classification (pp. 114-146). Cambridge University Press.
  • Zeshan, U. (2005). Sign languages. In M. Haspelmath, M. S. Dryer, D. Gil, & B. Comrie (Eds.), The world atlas of language structures (pp. 558-559). Oxford: Oxford University Press.
  • Zeshan, U. (2005). Question particles in sign languages. In M. Haspelmath, M. S. Dryer, D. Gil, & B. Comrie (Eds.), The world atlas of language structures (pp. 564-567). Oxford: Oxford University Press.
  • Zeshan, U., & Panda, S. (2005). Professional course in Indian sign language. Mumbai: Ali Yavar Jung National Institute for the Hearing Handicapped.
  • Zeshan, U., Pfau, R., & Aboh, E. (2005). When a wh-word is not a wh-word: the case of Indian sign language. In B. Tanmoy (Ed.), Yearbook of South Asian languages and linguistics 2005 (pp. 11-43). Berlin: Mouton de Gruyter.
  • Zeshan, U., Escobedo Delgado, C. E., Dikyuva, H., Panda, S., & De Vos, C. (2013). Cardinal numerals in rural sign languages: Approaching cross-modal typology. Linguistic Typology, 17(3), 357-396. doi:10.1515/lity-2013-0019.

    Abstract

    This article presents data on cardinal numerals in three sign languages from small-scale communities with hereditary deafness. The unusual features found in these data considerably extend the known range of typological variety across sign languages. Some features, such as non-decimal numeral bases, are hitherto unattested in sign languages but familiar from spoken languages, while others, such as subtractive sub-systems, are rare in both sign and speech. We conclude that for a complete typological appraisal of a domain, an approach to cross-modal typology, which includes a typologically diverse range of sign languages in addition to spoken languages, is both instructive and feasible.
  • Zeshan, U., Vasishta, M. N., & Sethna, M. (2005). Implementation of Indian Sign Language in educational settings. Asia Pacific Disability Rehabilitation Journal, 16(1), 16-40.

    Abstract

    This article reports on several sub-projects of research and development related to the use of Indian Sign Language in educational settings. In many countries around the world, sign languages are now recognised as the legitimate, full-fledged languages of the deaf communities that use them. In India, the development of sign language resources and their application in educational contexts is still in its initial stages. The work reported here is the first principled and comprehensive effort to establish educational programmes in Indian Sign Language at a national level. The programmes are of several types: a) Indian Sign Language instruction for hearing people; b) sign language teacher training programmes for deaf people; and c) educational materials for use in schools for the Deaf. The conceptual approach used in the programmes for deaf students is known as bilingual education, which emphasises the acquisition of a first language, Indian Sign Language, alongside the acquisition of spoken languages, primarily in their written form.
  • Zeshan, U. (2005). Irregular negatives in sign languages. In M. Haspelmath, M. S. Dryer, D. Gil, & B. Comrie (Eds.), The world atlas of language structures (pp. 560-563). Oxford: Oxford University Press.
  • Zhang, J., Bao, S., Furumai, R., Kucera, K. S., Ali, A., Dean, N. M., & Wang, X.-F. (2005). Protein phosphatase 5 is required for ATR-mediated checkpoint activation. Molecular and Cellular Biology, 25, 9910-9919. doi:10.1128/MCB.25.22.9910-9919.2005.

    Abstract

    In response to DNA damage or replication stress, the protein kinase ATR is activated and subsequently transduces genotoxic signals to cell cycle control and DNA repair machinery through phosphorylation of a number of downstream substrates. Very little is known about the molecular mechanism by which ATR is activated in response to genotoxic insults. In this report, we demonstrate that protein phosphatase 5 (PP5) is required for the ATR-mediated checkpoint activation. PP5 forms a complex with ATR in a genotoxic stress-inducible manner. Interference with the expression or the activity of PP5 leads to impairment of the ATR-mediated phosphorylation of hRad17 and Chk1 after UV or hydroxyurea treatment. Similar results are obtained in ATM-deficient cells, suggesting that the observed defect in checkpoint signaling is the consequence of impaired functional interaction between ATR and PP5. In cells exposed to UV irradiation, PP5 is required to elicit an appropriate S-phase checkpoint response. In addition, loss of PP5 leads to premature mitosis after hydroxyurea treatment. Interestingly, reduced PP5 activity exerts differential effects on the formation of intranuclear foci by ATR and replication protein A, implicating a functional role for PP5 in a specific stage of the checkpoint signaling pathway. Taken together, our results suggest that PP5 plays a critical role in the ATR-mediated checkpoint activation.
  • De Zubicaray, G. I., Acheson, D. J., & Hartsuiker, R. J. (Eds.). (2013). Mind what you say - general and specific mechanisms for monitoring in speech production [Research topic] [Special Issue]. Frontiers in Human Neuroscience. Retrieved from http://www.frontiersin.org/human_neuroscience/researchtopics/mind_what_you_say_-_general_an/1197.

    Abstract

    Psycholinguistic research has typically portrayed speech production as a relatively automatic process, because errors, when they are made, occur as seldom as one in every thousand words we utter. However, it has long been recognised that we need some form of control over what we are currently saying and what we plan to say. This capacity to both monitor our inner speech and self-correct our speech output has often been assumed to be a property of the language comprehension system. More recently, it has been demonstrated that speech production benefits from interfacing with more general cognitive processes such as selective attention, short-term memory (STM) and online response monitoring to resolve potential conflict and successfully produce the output of a verbal plan. The conditions and levels of representation according to which these more general planning, monitoring and control processes are engaged during speech production remain poorly understood. Moreover, there remains a paucity of information about their neural substrates, despite some of the first evidence of more general monitoring having come from electrophysiological studies of error-related negativities (ERNs). While aphasic speech errors continue to be a rich source of information, there has been comparatively little research focus on instances of speech repair. The purpose of this Frontiers Research Topic is to provide a forum for researchers to contribute investigations employing behavioural, neuropsychological, electrophysiological, neuroimaging and virtual lesioning techniques. In addition, while the focus of the research topic is on novel findings, we welcome submission of computational simulations, review articles and methods papers.
  • Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2013). Expression of multiple entities in Turkish Sign Language (TİD). In E. Arik (Ed.), Current Directions in Turkish Sign Language Research (pp. 272-302). Newcastle upon Tyne: Cambridge Scholars Publishing.

    Abstract

    This paper reports on an exploration of the ways in which multiple entities are expressed in Turkish Sign Language (TİD). The (descriptive and quantitative) analyses provided are based on a corpus of both spontaneous data and specifically elicited data, in order to provide as comprehensive an account as possible. We have found several devices in TİD for expression of multiple entities, in particular localization, spatial plural predicate inflection, and a specific form used to express multiple entities that are side by side in the same configuration (not reported for any other sign language to date), as well as numerals and quantifiers. In contrast to some other signed languages, TİD does not appear to have a productive system of plural reduplication. We argue that none of the devices encountered in the TİD data is a genuine plural marking device and that the plural interpretation of multiple entity localizations and plural predicate inflections is a by-product of the use of space to indicate the existence or the involvement in an event of multiple entities.
