Publications

  • Van Wijk, C., & Kempen, G. (1982). Kost zinsbouw echt tijd? [Does sentence construction really take time?]. In R. Stuip, & W. Zwanenberg (Eds.), Handelingen van het zevenendertigste Nederlands Filologencongres (pp. 223-231). Amsterdam: APA-Holland University Press.
  • Van Valin Jr., R. D. (2013). Head-marking languages and linguistic theory. In B. Bickel, L. A. Grenoble, D. A. Peterson, & A. Timberlake (Eds.), Language typology and historical contingency: In honor of Johanna Nichols (pp. 91-124). Amsterdam: Benjamins.

    Abstract

    In her path-breaking 1986 paper, Johanna Nichols proposed a typological contrast between head-marking and dependent-marking languages. Nichols argues that even though the syntactic relations between the head and its dependents are the same in both types of language, the syntactic “bond” between them is not the same; in dependent-marking languages it is one of government, whereas in head-marking languages it is one of apposition. This distinction raises an important question for linguistic theory: How can this contrast – government versus apposition – which can show up in all of the major phrasal types in a language, be captured? The purpose of this paper is to explore the various approaches that have been taken in an attempt to capture the difference between head-marked and dependent-marked syntax in different linguistic theories. The basic problem that head-marking languages pose for syntactic theory will be presented, and then generative approaches will be discussed. The analysis of head-marked structure in Role and Reference Grammar will be presented.
  • Van Valin Jr., R. D. (2013). Lexical representation, co-composition, and linking syntax and semantics. In J. Pustejovsky, P. Bouillon, H. Isahara, K. Kanzaki, & C. Lee (Eds.), Advances in generative lexicon theory (pp. 67-107). Dordrecht: Springer.
  • Van Heugten, M., Bergmann, C., & Cristia, A. (2015). The Effects of Talker Voice and Accent on Young Children's Speech Perception. In S. Fuchs, D. Pape, C. Petrone, & P. Perrier (Eds.), Individual Differences in Speech Production and Perception (pp. 57-88). Bern: Peter Lang.

    Abstract

    Within the first few years of life, children acquire many of the building blocks of their native language. This not only involves knowledge about the linguistic structure of spoken language, but also knowledge about the way in which this linguistic structure surfaces in their speech input. In this chapter, we review how infants and toddlers cope with differences between speakers and accents. Within the context of milestones in early speech perception, we examine how voice and accent characteristics are integrated during language processing, looking closely at the advantages and disadvantages of speaker and accent familiarity, surface-level deviation between two utterances, variability in the input, and prior speaker exposure. We conclude that although deviation from the child’s standard can complicate speech perception early in life, young listeners can overcome these additional challenges. This suggests that early spoken language processing is flexible and adaptive to the listening situation at hand.
  • Van Putten, S. (2013). The meaning of the Avatime additive particle tsye. In M. Balbach, L. Benz, S. Genzel, M. Grubic, A. Renans, S. Schalowski, M. Stegenwallner, & A. Zeldes (Eds.), Information structure: Empirical perspectives on theory (pp. 55-74). Potsdam: Universitätsverlag Potsdam. Retrieved from http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:de:kobv:517-opus-64804.
  • Verdonschot, R. G., & Tamaoka, K. (Eds.). (2015). The production of speech sounds across languages [Special Issue]. Japanese Psychological Research, 57(1).
  • Verhoef, T., Roberts, S. G., & Dingemanse, M. (2015). Emergence of systematic iconicity: Transmission, interaction and analogy. In D. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. P. Maglio (Eds.), Proceedings of the 37th Annual Meeting of the Cognitive Science Society (CogSci 2015) (pp. 2481-2486). Austin, TX: Cognitive Science Society.

    Abstract

    Languages combine arbitrary and iconic signals. How do iconic signals emerge and when do they persist? We present an experimental study of the role of iconicity in the emergence of structure in an artificial language. Using an iterated communication game in which we control the signalling medium as well as the meaning space, we study the evolution of communicative signals in transmission chains. This sheds light on how affordances of the communication medium shape and constrain the mappability and transmissibility of form-meaning pairs. We find that iconic signals can form the building blocks for wider compositional patterns.
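
    Editor's note: a minimal toy sketch of a transmission chain of the general kind used in such iterated-learning studies. The meaning space, syllable inventory, bottleneck size and analogy rule below are illustrative assumptions, not the authors' design.

      # Toy iterated-learning transmission chain (illustrative; not the authors' paradigm):
      # each generation learns meaning->signal pairs from a subset of the previous
      # generation's output (a bottleneck) and fills gaps by analogy.
      import random

      MEANINGS = [(shape, size) for shape in range(4) for size in range(4)]   # toy 4x4 meaning space
      SYLLABLES = ["ba", "di", "ko", "mu"]

      def random_language():
          return {m: random.choice(SYLLABLES) + random.choice(SYLLABLES) for m in MEANINGS}

      def learn(teacher, bottleneck=8):
          observed = dict(random.sample(list(teacher.items()), bottleneck))
          learner = {}
          for m in MEANINGS:
              if m in observed:
                  learner[m] = observed[m]
              else:   # analogy: copy the signal of the most similar meaning seen during learning
                  nearest = min(observed, key=lambda seen: abs(m[0] - seen[0]) + abs(m[1] - seen[1]))
                  learner[m] = observed[nearest]
          return learner

      language = random_language()
      for generation in range(10):
          language = learn(language)   # over generations, signals spread to similar meanings
      print(len(set(language.values())), "distinct signals remain for", len(MEANINGS), "meanings")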
  • Vernes, S. C., & Fisher, S. E. (2013). Genetic pathways implicated in speech and language. In S. Helekar (Ed.), Animal models of speech and language disorders (pp. 13-40). New York: Springer. doi:10.1007/978-1-4614-8400-4_2.

    Abstract

    Disorders of speech and language are highly heritable, providing strong support for a genetic basis. However, the underlying genetic architecture is complex, involving multiple risk factors. This chapter begins by discussing genetic loci associated with common multifactorial language-related impairments and goes on to detail the only gene (known as FOXP2) to be directly implicated in a rare monogenic speech and language disorder. Although FOXP2 was initially uncovered in humans, model systems have been invaluable in progressing our understanding of the function of this gene and its associated pathways in language-related areas of the brain. Research in species from mouse to songbird has revealed effects of this gene on relevant behaviours including acquisition of motor skills and learned vocalisations and demonstrated a role for Foxp2 in neuronal connectivity and signalling, particularly in the striatum. Animal models have also facilitated the identification of wider neurogenetic networks thought to be involved in language development and disorder and allowed the investigation of new candidate genes for disorders involving language, such as CNTNAP2 and FOXP1. Ongoing work in animal models promises to yield new insights into the genetic and neural mechanisms underlying human speech and language.
  • von Stutterheim, C., & Flecken, M. (Eds.). (2013). Principles of information organization in L2 discourse [Special Issue]. International Review of Applied Linguistics in Language Teaching (IRAL), 51(2).
  • Vosse, T., & Kempen, G. (1991). A hybrid model of human sentence processing: Parsing right-branching, center-embedded and cross-serial dependencies. In M. Tomita (Ed.), Proceedings of the Second International Workshop on Parsing Technologies.
  • Wanrooij, K., De Vos, J., & Boersma, P. (2015). Distributional vowel training may not be effective for Dutch adults. In Scottish Consortium for ICPhS 2015, M. Wolters, J. Livingstone, B. Beattie, R. Smith, M. MacMahon, J. Stuart-Smith, & J. Scobbie (Eds.), Proceedings of the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow: University of Glasgow.

    Abstract

    Distributional vowel training for adults has been reported as “effective” for Spanish and Bulgarian learners of Dutch vowels, in studies using a behavioural task. A recent study did not yield a similar clear learning effect for Dutch learners of the English vowel contrast /æ/~/ε/, as measured with event-related potentials (ERPs). The present study aimed to examine the possibility that the latter result was related to the method. As in the ERP study, we tested whether distributional training improved Dutch adult learners’ perception of English /æ/~/ε/. However, we measured behaviour instead of ERPs, in a design identical to that used in the previous studies with Spanish learners. The results do not support an effect of distributional training and thus “replicate” the ERP study. We conclude that it remains unclear whether distributional vowel training is effective for Dutch adults.
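
    Editor's note: for concreteness, a hedged sketch of how distributional-training stimuli are commonly assembled in this paradigm, with tokens sampled from a bimodal (two-peaked) versus unimodal frequency distribution over a step-wise /æ/~/ε/ continuum. The F1 values and presentation frequencies below are illustrative assumptions, not the materials of this study.

      # Sample training tokens from a bimodal vs. unimodal distribution over an
      # 8-step /ε/~/æ/ continuum (hypothetical F1 values in Hz).
      import numpy as np

      continuum_f1 = np.linspace(580, 860, 8)

      bimodal_weights  = np.array([1, 3, 5, 3, 3, 5, 3, 1], dtype=float)   # two peaks
      unimodal_weights = np.array([1, 2, 4, 5, 5, 4, 2, 1], dtype=float)   # one central peak

      def training_sequence(weights, n_tokens=256, seed=1):
          rng = np.random.default_rng(seed)
          probs = weights / weights.sum()
          steps = rng.choice(len(continuum_f1), size=n_tokens, p=probs)
          return continuum_f1[steps]

      bimodal_session = training_sequence(bimodal_weights)
      unimodal_session = training_sequence(unimodal_weights)
      print(f"bimodal mean F1: {bimodal_session.mean():.0f} Hz over {bimodal_session.size} tokens")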
  • Weber, A. (2000). Phonotactic and acoustic cues for word segmentation in English. In Proceedings of the 6th International Conference on Spoken Language Processing (ICSLP 2000) (pp. 782-785).

    Abstract

    This study investigates the influence of both phonotactic and acoustic cues on the segmentation of spoken English. Listeners detected embedded English words in nonsense sequences (word spotting). Words aligned with phonotactic boundaries were easier to detect than words without such alignment. Acoustic cues to boundaries could also have signaled word boundaries, especially when word onsets lacked phonotactic alignment. However, only one of several durational boundary cues showed a marginally significant correlation with response times (RTs). The results suggest that word segmentation in English is influenced primarily by phonotactic constraints and only secondarily by acoustic aspects of the speech signal.
  • Weber, A. (2000). The role of phonotactics in the segmentation of native and non-native continuous speech. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP, Workshop on Spoken Word Access Processes. Nijmegen: MPI for Psycholinguistics.

    Abstract

    Previous research has shown that listeners make use of their knowledge of phonotactic constraints to segment speech into individual words. The present study investigates the influence of phonotactics when segmenting a non-native language. German and English listeners detected embedded English words in nonsense sequences. German listeners also had knowledge of English, but English listeners had no knowledge of German. Word onsets were either aligned with a syllable boundary or not, according to the phonotactics of the two languages. Words aligned with either German or English phonotactic boundaries were easier for German listeners to detect than words without such alignment. Responses of English listeners were influenced primarily by English phonotactic alignment. The results suggest that both native and non-native phonotactic constraints influence lexical segmentation of a non-native, but familiar, language.
  • Weissenborn, J., & Stralka, R. (1984). Das Verstehen von Mißverständnissen. Eine ontogenetische Studie [Understanding misunderstandings: An ontogenetic study]. In Zeitschrift für Literaturwissenschaft und Linguistik (pp. 113-134). Stuttgart: Metzler.
  • Weissenborn, J. (1984). La genèse de la référence spatiale en langue maternelle et en langue seconde: similarités et différences [The genesis of spatial reference in the first and the second language: Similarities and differences]. In G. Extra, & M. Mittner (Eds.), Studies in second language acquisition by adult immigrants (pp. 262-286). Tilburg: Tilburg University.
  • Willems, R. M. (2015). Cognitive neuroscience of natural language use: Introduction. In Cognitive neuroscience of natural language use (pp. 1-7). Cambridge: Cambridge University Press.
  • Windhouwer, M., Petro, J., Newskaya, I., Drude, S., Aristar-Dry, H., & Gippert, J. (2013). Creating a serialization of LMF: The experience of the RELISH project. In G. Francopoulo (Ed.), LMF - Lexical Markup Framework (pp. 215-226). London: Wiley.
  • Windhouwer, M., & Wright, S. E. (2013). LMF and the Data Category Registry: Principles and application. In G. Francopoulo (Ed.), LMF: Lexical Markup Framework (pp. 41-50). London: Wiley.
  • Wittenburg, P., & Ringersma, J. (2013). Metadata description for lexicons. In R. H. Gouws, U. Heid, W. Schweickard, & H. E. Wiegand (Eds.), Dictionaries: An international encyclopedia of lexicography: Supplementary volume: Recent developments with focus on electronic and computational lexicography (pp. 1329-1335). Berlin: Mouton de Gruyter.
  • Wright, S. E., Windhouwer, M., Schuurman, I., & Kemps-Snijders, M. (2013). Community efforts around the ISOcat Data Category Registry. In I. Gurevych, & J. Kim (Eds.), The People's Web meets NLP: Collaboratively constructed language resources (pp. 349-374). New York: Springer.

    Abstract

    The ISOcat Data Category Registry provides a community computing environment for creating, storing, retrieving, harmonizing and standardizing data category specifications (DCs), used to register linguistic terms used in various fields. This chapter recounts the history of DC documentation in TC 37, beginning from paper-based lists created for lexicographers and terminologists and progressing to the development of a web-based resource for a much broader range of users. While describing the considerable strides that have been made to collect a very large comprehensive collection of DCs, it also outlines difficulties that have arisen in developing a fully operative web-based computing environment for achieving consensus on data category names, definitions, and selections and describes efforts to overcome some of the present shortcomings and to establish positive working procedures designed to engage a wide range of people involved in the creation of language resources.
  • Zavala, R. (2000). Multiple classifier systems in Akatek (Mayan). In G. Senft (Ed.), Systems of nominal classification (pp. 114-146). Cambridge: Cambridge University Press.
  • Zhang, Y., Yurovsky, D., & Yu, C. (2015). Statistical word learning is a continuous process: Evidence from the human simulation paradigm. In D. Noelle, R. Dale, A. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. P. Maglio (Eds.), Proceedings of the 37th Annual Meeting of the Cognitive Science Society (CogSci 2015) (pp. 2422-2427). Austin, TX: Cognitive Science Society.

    Abstract

    In the word-learning domain, both adults and young children are able to find the correct referent of a word from highly ambiguous contexts that involve many words and objects by computing distributional statistics across the co-occurrences of words and referents at multiple naming moments (Yu & Smith, 2007; Smith & Yu, 2008). However, there is still debate regarding how learners accumulate distributional information to learn object labels in natural learning environments, and what underlying learning mechanism learners are most likely to adopt. Using the Human Simulation Paradigm (Gillette, Gleitman, Gleitman & Lederer, 1999), we found that participants’ learning performance gradually improved and that their ability to remember and carry over partial knowledge from past learning instances facilitated subsequent learning. These results support the statistical learning model that word learning is a continuous process.
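
    Editor's note: as a hedged illustration of the kind of cross-situational statistic the abstract refers to (not the authors' model; the nonce words and objects are invented for the example), a learner can accumulate word-referent co-occurrence counts across ambiguous naming moments and pick each word's most frequent companion.

      # Minimal cross-situational word learning: count co-occurrences, then guess the
      # referent that has co-occurred with a word most often so far.
      from collections import defaultdict

      cooccurrence = defaultdict(lambda: defaultdict(int))

      def observe(words, objects):
          """One ambiguous naming moment: every word is paired with every visible object."""
          for w in words:
              for o in objects:
                  cooccurrence[w][o] += 1

      def best_guess(word):
          counts = cooccurrence[word]
          return max(counts, key=counts.get) if counts else None

      # Three learning instances; only the accumulated statistics disambiguate "dax".
      observe(["dax", "blick"], ["ball", "dog"])
      observe(["dax", "wug"],   ["ball", "cup"])
      observe(["blick"],        ["dog"])
      print(best_guess("dax"))   # -> ball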
  • De Zubicaray, G. I., Acheson, D. J., & Hartsuiker, R. J. (Eds.). (2013). Mind what you say - general and specific mechanisms for monitoring in speech production [Research topic] [Special Issue]. Frontiers in Human Neuroscience. Retrieved from http://www.frontiersin.org/human_neuroscience/researchtopics/mind_what_you_say_-_general_an/1197.

    Abstract

    Psycholinguistic research has typically portrayed speech production as a relatively automatic process. This is because when errors are made, they occur as seldom as one in every thousand words we utter. However, it has long been recognised that we need some form of control over what we are currently saying and what we plan to say. This capacity to both monitor our inner speech and self-correct our speech output has often been assumed to be a property of the language comprehension system. More recently, it has been demonstrated that speech production benefits from interfacing with more general cognitive processes such as selective attention, short-term memory (STM) and online response monitoring to resolve potential conflict and successfully produce the output of a verbal plan. The conditions and levels of representation according to which these more general planning, monitoring and control processes are engaged during speech production remain poorly understood. Moreover, there remains a paucity of information about their neural substrates, despite some of the first evidence of more general monitoring having come from electrophysiological studies of error related negativities (ERNs). While aphasic speech errors continue to be a rich source of information, there has been comparatively little research focus on instances of speech repair. The purpose of this Frontiers Research Topic is to provide a forum for researchers to contribute investigations employing behavioural, neuropsychological, electrophysiological, neuroimaging and virtual lesioning techniques. In addition, while the focus of the research topic is on novel findings, we welcome submission of computational simulations, review articles and methods papers.
  • Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2013). Expression of multiple entities in Turkish Sign Language (TİD). In E. Arik (Ed.), Current Directions in Turkish Sign Language Research (pp. 272-302). Newcastle upon Tyne: Cambridge Scholars Publishing.

    Abstract

    This paper reports on an exploration of the ways in which multiple entities are expressed in Turkish Sign Language (TİD). The (descriptive and quantitative) analyses provided are based on a corpus of both spontaneous data and specifically elicited data, in order to provide as comprehensive an account as possible. We have found several devices in TİD for expression of multiple entities, in particular localization, spatial plural predicate inflection, and a specific form used to express multiple entities that are side by side in the same configuration (not reported for any other sign language to date), as well as numerals and quantifiers. In contrast to some other signed languages, TİD does not appear to have a productive system of plural reduplication. We argue that none of the devices encountered in the TİD data is a genuine plural marking device and that the plural interpretation of multiple entity localizations and plural predicate inflections is a by-product of the use of space to indicate the existence or the involvement in an event of multiple entities.
