Publications

  • Ten Bosch, L., Hämäläinen, A., & Ernestus, M. (2011). Assessing acoustic reduction: Exploiting local structure in speech. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 2665-2668).

    Abstract

    This paper presents a method to quantify the spectral characteristics of reduction in speech. Hämäläinen et al. (2009) propose a measure of spectral reduction which is able to predict a substantial amount of the variation in duration that linguistically motivated variables do not account for. In this paper, we continue studying acoustic reduction in speech by developing a new acoustic measure of reduction, based on local manifold structure in speech. We show that this measure yields significantly improved statistical models for predicting variation in duration.
  • Ten Bosch, L., Giezenaar, G., Boves, L., & Ernestus, M. (2016). Modeling language-learners' errors in understanding casual speech. In G. Adda, V. Barbu Mititelu, J. Mariani, D. Tufiş, & I. Vasilescu (Eds.), Errors by humans and machines in multimedia, multimodal, multilingual data processing. Proceedings of Errare 2015 (pp. 107-121). Bucharest: Editura Academiei Române.

    Abstract

    In spontaneous conversations, words are often produced in reduced form compared to formal careful speech. In English, for instance, ’probably’ may be pronounced as ’poly’ and ’police’ as ’plice’. Reduced forms are very common, and native listeners usually do not have any problems with interpreting these reduced forms in context. Non-native listeners, however, have great difficulties in comprehending reduced forms. In order to investigate the problems in comprehension that non-native listeners experience, a dictation experiment was conducted in which sentences were presented auditorily to non-natives either in full (unreduced) or reduced form. The types of errors made by the L2 listeners reveal aspects of the cognitive processes underlying this dictation task. In addition, we compare the errors made by these human participants with the type of word errors made by DIANA, a recently developed computational model of word comprehension.
  • Ten Bosch, L., Boves, L., & Ernestus, M. (2013). Towards an end-to-end computational model of speech comprehension: simulating a lexical decision task. In Proceedings of INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association (pp. 2822-2826).

    Abstract

    This paper describes a computational model of speech comprehension that takes the acoustic signal as input and predicts reaction times as observed in an auditory lexical decision task. By doing so, we explore a new generation of end-to-end computational models that are able to simulate the behaviour of human subjects participating in a psycholinguistic experiment. So far, nearly all computational models of speech comprehension do not start from the speech signal itself, but from abstract representations of the speech signal, while the few existing models that do start from the acoustic signal cannot directly model reaction times as obtained in comprehension experiments. The main functional components in our model are the perception stage, which is compatible with the psycholinguistic model Shortlist B and is implemented with techniques from automatic speech recognition, and the decision stage, which is based on the linear ballistic accumulation decision model. We successfully tested our model against data from 20 participants performing a large-scale auditory lexical decision experiment. Analyses show that the model is a good predictor for the average judgment and reaction time for each word.
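    The linear ballistic accumulator (LBA) decision stage mentioned in this abstract can be illustrated with a minimal simulation. This is a generic sketch of the standard LBA race model with made-up parameter values, not the authors' implementation: two accumulators (word vs. nonword) rise linearly from random start points toward a threshold, and the first to arrive determines the response and the reaction time.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def lba_trial(drifts, b=1.0, A=0.5, s=0.3, t0=0.2):
        """One linear ballistic accumulator race.

        drifts : mean evidence-accumulation rates, one per response
                 (e.g. [word, nonword]); b is the threshold, A the upper
                 bound of the uniform start point, s the drift-rate noise,
                 t0 the non-decision time. All values here are illustrative.
        """
        starts = rng.uniform(0, A, size=len(drifts))   # random start points
        rates = rng.normal(drifts, s)                  # trial-to-trial drift noise
        rates = np.where(rates > 0, rates, 1e-6)       # keep rates positive
        times = (b - starts) / rates                   # linear rise to threshold
        winner = int(np.argmin(times))                 # first accumulator to finish
        return winner, t0 + times[winner]

    # Simulate a "word" stimulus: the word accumulator gets the higher drift.
    results = [lba_trial([1.2, 0.6]) for _ in range(5000)]
    acc = np.mean([w == 0 for w, _ in results])
    mean_rt = np.mean([rt for _, rt in results])
    print(f"accuracy={acc:.2f}, mean RT={mean_rt:.2f}s")
    ```

    Averaging the winning times over many simulated trials yields predicted mean reaction times per word, which is the quantity the model is tested against.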
  • Terrill, A. (2010). Complex predicates and complex clauses in Lavukaleve. In J. Bowden, N. P. Himmelman, & M. Ross (Eds.), A journey through Austronesian and Papuan linguistic and cultural space: Papers in honour of Andrew K. Pawley (pp. 499-512). Canberra: Pacific Linguistics.
  • Terrill, A. (2011). Limits of the substrate: Substrate grammatical influence in Solomon Islands Pijin. In C. Lefebvre (Ed.), Creoles, their substrates, and language typology (pp. 513-529). Amsterdam: John Benjamins.

    Abstract

    What grammatical elements of a substrate language find their way into a creole? Grammatical features of the Oceanic substrate languages have been shown to be crucial in the development of Solomon Islands Pijin and of Melanesian Pidgin as a whole (Keesing 1988), so one might expect constructions which are very stable in the Oceanic family of languages to show up as substrate influence in the creole. This paper investigates three constructions in Oceanic languages which have been stable over thousands of years and persist throughout a majority of the Oceanic languages spoken in the Solomon Islands. The paper asks whether these are the sorts of constructions which could be expected to be reflected in Solomon Islands Pijin and shows that none of these persistent constructions appears in Solomon Islands Pijin at all. The absence of these constructions in Solomon Islands Pijin could be due to simplification: Creole genesis involves simplification of the substrate grammars. However, while simplification could be the explanation, it is not necessarily the case that all complex structures become simplified. For instance Solomon Islands Pijin pronoun paradigms are more complex than those in English, but the complexity is similar to that of the substrate languages. Thus it is not the case that all areas of a creole language are necessarily simplified. One must therefore look further than just simplification for an explanation of the presence or absence of stable grammatical features deriving from the substrate in creole languages. An account based on constraints in specific domains (Siegel 1999) is a better predictor of the behaviour of substrate constructions in Solomon Islands Pijin.
  • Thompson-Schill, S., Hagoort, P., Dominey, P. F., Honing, H., Koelsch, S., Ladd, D. R., Lerdahl, F., Levinson, S. C., & Steedman, M. (2013). Multiple levels of structure in language and music. In M. A. Arbib (Ed.), Language, music, and the brain: A mysterious relationship (pp. 289-303). Cambridge, MA: MIT Press.

    Abstract

    A forum devoted to the relationship between music and language begins with an implicit assumption: There is at least one common principle that is central to all human musical systems and all languages, but that is not characteristic of (most) other domains. Why else should these two categories be paired together for analysis? We propose that one candidate for a common principle is their structure. In this chapter, we explore the nature of that structure—and its consequences for psychological and neurological processing mechanisms—within and across these two domains.
  • Tice, M., & Henetz, T. (2011). Turn-boundary projection: Looking ahead. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 838-843). Austin, TX: Cognitive Science Society.

    Abstract

    Coordinating with others is hard; and yet we accomplish this every day when we take turns in a conversation. How do we do this? The present study introduces a new method of measuring turn-boundary projection that enables researchers to achieve more valid, flexible, and temporally informative data on online turn projection: tracking an observer’s gaze from the current speaker to the next speaker. In this preliminary investigation, participants consistently looked at the current speaker during their turn. Additionally, they looked to the next speaker before her turn began, and sometimes even before the current speaker finished speaking. This suggests that observer gaze is closely aligned with perceptual processes of turn-boundary projection, and thus may equip the field with the tools to explore how we manage to take turns.
  • Timmer, K., Ganushchak, L. Y., Mitlina, Y., & Schiller, N. O. (2013). Choosing first or second language phonology in 125 ms [Abstract]. Journal of Cognitive Neuroscience, 25 Suppl., 164.

    Abstract

    We are often in a bilingual situation (e.g., overhearing a conversation in the train). We investigated whether first (L1) and second language (L2) phonologies are automatically activated. A masked priming paradigm was used, with Russian words as targets and either Russian or English words as primes. Event-related potentials (ERPs) were recorded while Russian (L1) – English (L2) bilinguals read aloud L1 target words (e.g. РЕЙС /reis/ ‘flight’) primed with either L1 (e.g. РАНА /rana/ ‘wound’) or L2 words (e.g. PACK). Target words were read faster when they were preceded by phonologically related L1 primes but not by orthographically related L2 primes. ERPs showed orthographic priming in the 125-200 ms time window. Thus, both L1 and L2 phonologies are simultaneously activated during L1 reading. The results provide support for non-selective models of bilingual reading, which assume automatic activation of the non-target language phonology even when it is not required by the task.
  • Torreira, F., & Ernestus, M. (2010). Phrase-medial vowel devoicing in spontaneous French. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 2006-2009).

    Abstract

    This study investigates phrase-medial vowel devoicing in European French (e.g. /ty po/ [typo] 'you can'). Our spontaneous speech data confirm that French phrase-medial devoicing is a frequent phenomenon affecting high vowels preceded by voiceless consonants. We also found that devoicing is more frequent in temporally reduced and coarticulated vowels. Complete and partial devoicing were conditioned by the same variables (speech rate, consonant type and distance from the end of the AP). Given these results, we propose that phrase-medial vowel devoicing in French arises mainly from the temporal compression of vocalic gestures and the aerodynamic conditions imposed by high vowels.
  • Torreira, F., & Ernestus, M. (2010). The Nijmegen corpus of casual Spanish. In N. Calzolari, K. Choukri, B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, & D. Tapias (Eds.), Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC'10) (pp. 2981-2985). Paris: European Language Resources Association (ELRA).

    Abstract

    This article describes the preparation, recording and orthographic transcription of a new speech corpus, the Nijmegen Corpus of Casual Spanish (NCCSp). The corpus contains around 30 hours of recordings of 52 Madrid Spanish speakers engaged in conversations with friends. Casual speech was elicited during three different parts, which together provided around ninety minutes of speech from every group of speakers. While Parts 1 and 2 did not require participants to perform any specific task, in Part 3 participants negotiated a common answer to general questions about society. Information about how to obtain a copy of the corpus can be found online at http://mirjamernestus.ruhosting.nl/Ernestus/NCCSp
  • Trilsbeek, P., & Windhouwer, M. (2016). FLAT: A CLARIN-compatible repository solution based on Fedora Commons. In Proceedings of the CLARIN Annual Conference 2016. Clarin ERIC.

    Abstract

    This paper describes the development of a CLARIN-compatible repository solution that fulfils both the long-term preservation requirements and the current-day discoverability and usability needs of an online data repository of language resources. The widely used Fedora Commons open source repository framework, combined with the Islandora discovery layer, forms the basis of the solution. On top of this existing solution, additional modules and tools are developed to make it suitable for the types of data and metadata that are used by the participating partners.

  • Tschöpel, S., Schneider, D., Bardeli, R., Schreer, O., Masneri, S., Wittenburg, P., Sloetjes, H., Lenkiewicz, P., & Auer, E. (2011). AVATecH: Audio/Video technology for humanities research. In C. Vertan, M. Slavcheva, P. Osenova, & S. Piperidis (Eds.), Proceedings of the Workshop on Language Technologies for Digital Humanities and Cultural Heritage, Hissar, Bulgaria, 16 September 2011 (pp. 86-89). Shoumen, Bulgaria: Incoma Ltd.

    Abstract

    In the AVATecH project the Max-Planck Institute for Psycholinguistics (MPI) and the Fraunhofer institutes HHI and IAIS aim to significantly speed up the process of creating annotations of audio-visual data for humanities research. For this we integrate state-of-the-art audio and video pattern recognition algorithms into the widely used ELAN annotation tool. To address the problem of heterogeneous annotation tasks and recordings we provide modular components extended by adaptation and feedback mechanisms to achieve competitive annotation quality within significantly less annotation time. Currently we are designing a large-scale end-user evaluation of the project.
  • Tuinman, A., & Cutler, A. (2010). Casual speech processes: L1 knowledge and L2 speech perception. In K. Dziubalska-Kołaczyk, M. Wrembel, & M. Kul (Eds.), Proceedings of the 6th International Symposium on the Acquisition of Second Language Speech, New Sounds 2010, Poznań, Poland, 1-3 May 2010 (pp. 512-517). Poznan: Adama Mickiewicz University.

    Abstract

    Every language manifests casual speech processes, and hence every second language too. This study examined how listeners deal with second-language casual speech processes, as a function of the processes in their native language. We compared a match case, where a second-language process (/t/-reduction) is also operative in native speech, with a mismatch case, where a second-language process (/r/-insertion) is absent from native speech. In each case native and non-native listeners judged stimuli in which a given phoneme (in sentence context) varied along a continuum from absent to present. Second-language listeners in general mimicked native performance in the match case, but deviated significantly from native performance in the mismatch case. Together these results make it clear that the mapping from first to second language is as important in the interpretation of casual speech processes as in other dimensions of speech perception. Unfamiliar casual speech processes are difficult to adapt to in a second language. Casual speech processes that are already familiar from native speech, however, are easy to adapt to; indeed, our results even suggest that it is possible for subtle differences in their occurrence patterns across the two languages to be detected, and to be accommodated to in second-language listening.
  • Tuinman, A., & Cutler, A. (2011). L1 knowledge and the perception of casual speech processes in L2. In M. Wrembel, M. Kul, & K. Dziubalska-Kolaczyk (Eds.), Achievements and perspectives in SLA of speech: New Sounds 2010. Volume I (pp. 289-301). Frankfurt am Main: Peter Lang.

    Abstract

    Every language manifests casual speech processes, and hence every second language too. This study examined how listeners deal with second-language casual speech processes, as a function of the processes in their native language. We compared a match case, where a second-language process (/t/-reduction) is also operative in native speech, with a mismatch case, where a second-language process (/r/-insertion) is absent from native speech. In each case native and non-native listeners judged stimuli in which a given phoneme (in sentence context) varied along a continuum from absent to present. Second-language listeners in general mimicked native performance in the match case, but deviated significantly from native performance in the mismatch case. Together these results make it clear that the mapping from first to second language is as important in the interpretation of casual speech processes as in other dimensions of speech perception. Unfamiliar casual speech processes are difficult to adapt to in a second language. Casual speech processes that are already familiar from native speech, however, are easy to adapt to; indeed, our results even suggest that it is possible for subtle differences in their occurrence patterns across the two languages to be detected, and to be accommodated to in second-language listening.
  • Tuinman, A., Mitterer, H., & Cutler, A. (2011). The efficiency of cross-dialectal word recognition. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 153-156).

    Abstract

    Dialects of the same language can differ in the casual speech processes they allow; e.g., British English allows the insertion of [r] at word boundaries in sequences such as saw ice, while American English does not. In two speeded word recognition experiments, American listeners heard such British English sequences; in contrast to non-native listeners, they accurately perceived intended vowel-initial words even with intrusive [r]. Thus despite input mismatches, cross-dialectal word recognition benefits from the full power of native-language processing.
  • Turco, G., Gubian, M., & Schertz, J. (2011). A quantitative investigation of the prosody of Verum Focus in Italian. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 961-964).

    Abstract

    This paper presents a quantitative investigation of the prosodic marking of Verum focus (VF) in Italian, which is said to be realized with a pitch accent on the finite verb (e.g. A: Paul has not eaten the banana - B: (No), Paul HAS eaten the banana!). We tried to discover whether and how Italian speakers prosodically mark VF when producing full-fledged sentences, using a semi-spontaneous production experiment on 27 speakers. Speech rate and f0 contours were extracted using automatic data processing tools and were subsequently analysed using Functional Data Analysis (FDA), which allowed for automatic visualization of patterns in the contour shapes. Our results show that the postfocal region of VF sentences exhibits faster speech rate and lower f0 compared to non-VF cases. However, the expected consistent f0 difference in the focal region of VF sentences was not found in this analysis.
  • Ünal, E., & Papafragou, A. (2013). Linguistic and conceptual representations of inference as a knowledge source. In S. Baiz, N. Goldman, & R. Hawkes (Eds.), Proceedings of the 37th Annual Boston University Conference on Language Development (BUCLD 37) (pp. 433-443). Boston: Cascadilla Press.
  • Van Turennout, M., Schmitt, B., & Hagoort, P. (2003). When words come to mind: Electrophysiological insights on the time course of speaking and understanding words. In N. O. Schiller, & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 241-278). Berlin: Mouton de Gruyter.
  • van Staden, M., & Majid, A. (2003). Body colouring task 2003. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 66-68). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877666.

    Abstract

    This Field Manual entry has been superseded by the published version: Van Staden, M., & Majid, A. (2006). Body colouring task. Language Sciences, 28(2-3), 158-161. doi:10.1016/j.langsci.2005.11.004.

  • Van Rees Vellinga, M., Hanulikova, A., Weber, A., & Zwitserlood, P. (2010). A neurophysiological investigation of processing phoneme substitutions in L2. In New Sounds 2010: Sixth International Symposium on the Acquisition of Second Language Speech (pp. 518-523). Poznan, Poland: Adam Mickiewicz University.
  • Van der Meij, L., Isaac, A., & Zinn, C. (2010). A web-based repository service for vocabularies and alignments in the cultural heritage domain. In L. Aroyo, G. Antoniou, E. Hyvönen, A. Ten Teije, H. Stuckenschmidt, L. Cabral, & T. Tudorache (Eds.), The Semantic Web: Research and Applications. 7th Extended Semantic Web Conference, Proceedings, Part I (pp. 394-409). Heidelberg: Springer.

    Abstract

    Controlled vocabularies of various kinds (e.g., thesauri, classification schemes) play an integral part in making Cultural Heritage collections accessible. The various institutions participating in the Dutch CATCH programme maintain and make use of a rich and diverse set of vocabularies. This makes it hard to provide a uniform point of access to all collections at once. Our SKOS-based vocabulary and alignment repository aims at providing technology for managing the various vocabularies, and for exploiting semantic alignments across any two of them. The repository system exposes web services that effectively support the construction of tools for searching and browsing across vocabularies and collections or for collection curation (indexing), as we demonstrate.
  • Van Hout, A., Veenstra, A., & Berends, S. (2011). All pronouns are not acquired equally in Dutch: Elicitation of object and quantitative pronouns. In M. Pirvulescu, M. C. Cuervo, A. T. Pérez-Leroux, J. Steele, & N. Strik (Eds.), Selected proceedings of the 4th Conference on Generative Approaches to Language Acquisition North America (GALANA 2010) (pp. 106-121). Somerville, MA: Cascadilla Proceedings Project.

    Abstract

    This research reports the results of eliciting pronouns in two syntactic environments: Object pronouns and quantitative er (Q-er). Thus another type of language is added to the literature on subject and object clitic acquisition in the Romance languages (Jakubowicz et al., 1998; Hamann et al., 1996). Quantitative er is a unique pronoun in the Germanic languages; it has the same distribution as partitive clitics in Romance. Q-er is an N'-anaphor and occurs obligatorily with headless noun phrases with a numeral or weak quantifier. Q-er is licensed only when the context offers an antecedent; it binds an empty position in the NP. Data from typically-developing children aged 5;0-6;0 show that object and Q-er pronouns are not acquired equally; it is proposed that this is due to their different syntax. The use of Q-er involves more sophisticated syntactic knowledge: Q-er occurs at the left edge of the VP and binds an empty position in the NP, whereas object pronouns are simply stand-ins for full NPs and occur in the same position. These Dutch data reveal that pronouns are not used as exclusively as object clitics are in the Romance languages (Varlakosta, in prep.).
  • Van Gerven, M., & Simanova, I. (2010). Concept classification with Bayesian multi-task learning. In Proceedings of the NAACL HLT 2010 First Workshop on Computational Neurolinguistics (pp. 10-17). Los Angeles: Association for Computational Linguistics.

    Abstract

    Multivariate analysis allows decoding of single trial data in individual subjects. Since different models are obtained for each subject it becomes hard to perform an analysis on the group level. We introduce a new algorithm for Bayesian multi-task learning which imposes a coupling between single-subject models. Using the CMU fMRI dataset it is shown that the algorithm can be used for concept classification based on the average activation of regions in the AAL atlas. Concepts which were most easily classified correspond to the categories shelter, manipulation and eating, which is in accordance with the literature. The multi-task learning algorithm is shown to find regions of interest that are common to all subjects, which therefore facilitates interpretation of the obtained models.
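    The idea of coupling single-subject models can be illustrated with a toy sketch on synthetic data. This is not the authors' Bayesian algorithm; it stands in for the shared prior with a simpler device, shrinking each subject's logistic-regression weights toward a group-level mean, so that the group solution is interpretable across subjects.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic per-subject binary classification tasks sharing structure.
    n_subj, n_trials, n_feat = 4, 200, 10
    w_true = rng.normal(size=n_feat)                   # shared ground-truth weights
    X = rng.normal(size=(n_subj, n_trials, n_feat))
    y = np.array([(X[s] @ (w_true + 0.3 * rng.normal(size=n_feat)) > 0).astype(float)
                  for s in range(n_subj)])             # subject-specific perturbations

    def fit_multitask(X, y, lam=1.0, lr=0.1, steps=500):
        """Logistic regression per subject, coupled by shrinking each
        subject's weights toward the group mean (a crude stand-in for
        a shared Bayesian prior over single-subject models)."""
        W = np.zeros((n_subj, n_feat))
        for _ in range(steps):
            mu = W.mean(axis=0)                        # group-level weights
            for s in range(n_subj):
                p = 1 / (1 + np.exp(-X[s] @ W[s]))     # per-subject predictions
                grad = X[s].T @ (p - y[s]) / n_trials + lam * (W[s] - mu)
                W[s] -= lr * grad
        return W, W.mean(axis=0)

    W, mu = fit_multitask(X, y)
    acc = np.mean([((X[s] @ W[s] > 0) == (y[s] > 0.5)).mean() for s in range(n_subj)])
    print(f"mean training accuracy={acc:.2f}")
    ```

    Features with large weights in the group vector `mu` play the role of the regions of interest common to all subjects in the abstract above.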
  • Van Valin Jr., R. D. (2016). An overview of information structure in three Amazonian languages. In M. Fernandez-Vest, & R. D. Van Valin Jr. (Eds.), Information structure and spoken language from a cross-linguistic perspective (pp. 77-92). Berlin: Mouton de Gruyter.
  • Van Valin Jr., R. D. (2003). Minimalism and explanation. In J. Moore, & M. Polinsky (Eds.), The nature of explanation in linguistic theory (pp. 281-297). University of Chicago Press.
  • Van Gijn, R. (2011). Multi-verb constructions in Yurakaré. In A. Y. Aikhenvald, & P. C. Muysken (Eds.), Multi-verb constructions: A view from the Americas (pp. 255-282). Leiden: Brill.
  • Van Valin Jr., R. D. (2013). Head-marking languages and linguistic theory. In B. Bickel, L. A. Grenoble, D. A. Peterson, & A. Timberlake (Eds.), Language typology and historical contingency: In honor of Johanna Nichols (pp. 91-124). Amsterdam: Benjamins.

    Abstract

    In her path-breaking 1986 paper, Johanna Nichols proposed a typological contrast between head-marking and dependent-marking languages. Nichols argues that even though the syntactic relations between the head and its dependents are the same in both types of language, the syntactic “bond” between them is not the same; in dependent-marking languages it is one of government, whereas in head-marking languages it is one of apposition. This distinction raises an important question for linguistic theory: How can this contrast – government versus apposition – which can show up in all of the major phrasal types in a language, be captured? The purpose of this paper is to explore the various approaches that have been taken in an attempt to capture the difference between head-marked and dependent-marked syntax in different linguistic theories. The basic problem that head-marking languages pose for syntactic theory will be presented, and then generative approaches will be discussed. The analysis of head-marked structure in Role and Reference Grammar will be presented.
  • Van Valin Jr., R. D. (2013). Lexical representation, co-composition, and linking syntax and semantics. In J. Pustejovsky, P. Bouillon, H. Isahara, K. Kanzaki, & C. Lee (Eds.), Advances in generative lexicon theory (pp. 67-107). Dordrecht: Springer.
  • Van Valin Jr., R. D. (1994). Extraction restrictions, competing theories and the argument from the poverty of the stimulus. In S. D. Lima, R. Corrigan, & G. K. Iverson (Eds.), The reality of linguistic rules (pp. 243-259). Amsterdam: Benjamins.
  • Van Geenhoven, V. (1998). On the Argument Structure of some Noun Incorporating Verbs in West Greenlandic. In M. Butt, & W. Geuder (Eds.), The Projection of Arguments - Lexical and Compositional Factors (pp. 225-263). Stanford, CA, USA: CSLI Publications.
  • Van Hout, A., & Veenstra, A. (2010). Telicity marking in Dutch child language: Event realization or no aspectual coercion? In J. Costa, A. Castro, M. Lobo, & F. Pratas (Eds.), Language Acquisition and Development: Proceedings of GALA 2009 (pp. 216-228). Newcastle upon Tyne: Cambridge Scholars Publishing.
  • Van Valin Jr., R. D. (1998). The acquisition of WH-questions and the mechanisms of language acquisition. In M. Tomasello (Ed.), The new psychology of language: Cognitive and functional approaches to language structure (pp. 221-249). Mahwah, New Jersey: Erlbaum.
  • Van Gijn, R., Haude, K., & Muysken, P. (2011). Subordination in South America: An overview. In R. Van Gijn, K. Haude, & P. Muysken (Eds.), Subordination in native South-American languages (pp. 1-24). Amsterdam: Benjamins.
  • Van Valin Jr., R. D. (2010). Role and reference grammar as a framework for linguistic analysis. In B. Heine, & H. Narrog (Eds.), The Oxford handbook of linguistic analysis (pp. 703-738). Oxford: Oxford University Press.
  • Van Gijn, R. (2011). Semantic and grammatical integration in Yurakaré subordination. In R. Van Gijn, K. Haude, & P. Muysken (Eds.), Subordination in native South-American languages (pp. 169-192). Amsterdam: Benjamins.

    Abstract

    Yurakaré (unclassified, central Bolivia) has five subordination strategies (on the basis of a morphosyntactic definition). In this paper I argue that the use of these different strategies is conditioned by the degree of conceptual synthesis of the two events, relating to temporal integration and participant integration. The most integrated events are characterized by shared time reference; morphosyntactically they are serial verb constructions, with syntactically fused predicates. The other constructions are characterized by less grammatical integration, which correlates either with a low degree of temporal integration of the dependent predicate and the main predicate, or with participant discontinuity.
  • Van de Ven, M., Tucker, B. V., & Ernestus, M. (2010). Semantic facilitation in bilingual everyday speech comprehension. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 1245-1248).

    Abstract

    Previous research suggests that bilinguals presented with low and high predictability sentences benefit from semantics in clear but not in conversational speech [1]. In everyday speech, however, many words are not highly predictable. Previous research has shown that native listeners can also use more subtle semantic contextual information [2]. The present study reports two auditory lexical decision experiments investigating to what extent late Asian-English bilinguals benefit from subtle semantic cues in their processing of English unreduced and reduced speech. Our results indicate that these bilinguals are less sensitive to semantic cues than native listeners for both speech registers.
  • Van Putten, S. (2013). The meaning of the Avatime additive particle tsye. In M. Balbach, L. Benz, S. Genzel, M. Grubic, A. Renans, S. Schalowski, M. Stegenwallner, & A. Zeldes (Eds.), Information structure: Empirical perspectives on theory (pp. 55-74). Potsdam: Universitätsverlag Potsdam. Retrieved from http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:de:kobv:517-opus-64804.
  • Van Uytvanck, D., Zinn, C., Broeder, D., Wittenburg, P., & Gardelleni, M. (2010). Virtual language observatory: The portal to the language resources and technology universe. In N. Calzolari, B. Maegaard, J. Mariani, J. Odjik, K. Choukri, S. Piperidis, M. Rosner, & D. Tapias (Eds.), Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10) (pp. 900-903). European Language Resources Association (ELRA).

    Abstract

    Over the years, the field of Language Resources and Technology (LRT) has developed a tremendous amount of resources and tools. However, there is no ready-to-use map that researchers could use to gain a good overview and steadfast orientation when searching for, say, corpora or software tools to support their studies. It is rather the case that information is scattered across project- or organisation-specific sites, which makes it hard if not impossible for less-experienced researchers to gather all relevant material. Clearly, the provision of metadata is central to resource and software exploration. However, in the LRT field, metadata comes in many forms, tastes and qualities, and therefore substantial harmonization and curation efforts are required to provide researchers with metadata-based guidance. To address this issue a broad alliance of LRT providers (CLARIN, the Linguist List, DOBES, DELAMAN, DFKI, ELRA) have initiated the Virtual Language Observatory portal to provide a low-barrier, easy-to-follow entry point to language resources and tools; it can be accessed via http://www.clarin.eu/vlo
  • Vapnarsky, V., & Le Guen, O. (2011). The guardians of space: Understanding ecological and historical relations of the contemporary Yucatec Mayas to their landscape. In C. Isendahl, & B. Liljefors Persson (Eds.), Ecology, Power, and Religion in Maya Landscapes: Proceedings of the 11th European Maya Conference. Acta Mesoamericana, Vol. 23. Markt Schwaben: Saurwein.
  • Vernes, S. C., & Fisher, S. E. (2011). Functional genomic dissection of speech and language disorders. In J. D. Clelland (Ed.), Genomics, proteomics, and the nervous system (pp. 253-278). New York: Springer.

    Abstract

    Mutations of the human FOXP2 gene have been shown to cause severe difficulties in learning to make coordinated sequences of articulatory gestures that underlie speech (developmental verbal dyspraxia or DVD). Affected individuals are impaired in multiple aspects of expressive and receptive linguistic processing and display abnormal grey matter volume and functional activation patterns in cortical and subcortical brain regions. The protein encoded by FOXP2 belongs to a divergent subgroup of forkhead-box transcription factors, with a distinctive DNA-binding domain and motifs that mediate hetero- and homodimerization. This chapter describes the successful use of FOXP2 as a unique molecular window into neurogenetic pathways that are important for speech and language development, adopting several complementary strategies. These include direct functional investigations of FOXP2 splice variants and the effects of etiological mutations. FOXP2’s role as a transcription factor also enabled the development of functional genomic routes for dissecting neurogenetic mechanisms that may be relevant for speech and language. By identifying downstream target genes regulated by FOXP2, it was possible to identify common regulatory themes in modulating synaptic plasticity, neurodevelopment, and axon guidance. These targets represent novel entry points into in vivo pathways that may be disturbed in speech and language disorders. The identification of FOXP2 target genes has also led to the discovery of a shared neurogenetic pathway between clinically distinct language disorders: the rare Mendelian form of DVD and a complex and more common form of language disorder known as Specific Language Impairment.

  • Vernes, S. C., & Fisher, S. E. (2013). Genetic pathways implicated in speech and language. In S. Helekar (Ed.), Animal models of speech and language disorders (pp. 13-40). New York: Springer. doi:10.1007/978-1-4614-8400-4_2.

    Abstract

    Disorders of speech and language are highly heritable, providing strong support for a genetic basis. However, the underlying genetic architecture is complex, involving multiple risk factors. This chapter begins by discussing genetic loci associated with common multifactorial language-related impairments and goes on to detail the only gene (known as FOXP2) to be directly implicated in a rare monogenic speech and language disorder. Although FOXP2 was initially uncovered in humans, model systems have been invaluable in progressing our understanding of the function of this gene and its associated pathways in language-related areas of the brain. Research in species from mouse to songbird has revealed effects of this gene on relevant behaviours including acquisition of motor skills and learned vocalisations and demonstrated a role for Foxp2 in neuronal connectivity and signalling, particularly in the striatum. Animal models have also facilitated the identification of wider neurogenetic networks thought to be involved in language development and disorder and allowed the investigation of new candidate genes for disorders involving language, such as CNTNAP2 and FOXP1. Ongoing work in animal models promises to yield new insights into the genetic and neural mechanisms underlying human speech and language.
  • Versteegh, M., Ten Bosch, L., & Boves, L. (2010). Active word learning under uncertain input conditions. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 2930-2933). ISCA.

    Abstract

    This paper presents an analysis of phoneme durations of emotional speech in two languages: Dutch and Korean. The analyzed corpus of emotional speech has been specifically developed for the purpose of cross-linguistic comparison, and is more balanced than any similar corpus available so far: a) it contains expressions by both Dutch and Korean actors and is based on judgments by both Dutch and Korean listeners; b) the same elicitation technique and recording procedure were used for recordings of both languages; and c) the phonetics of the carrier phrase were constructed to be permissible in both languages. The carefully controlled phonetic content of the carrier phrase allows for analysis of the role of specific phonetic features, such as phoneme duration, in emotional expression in Dutch and Korean. In this study the mutual effect of language and emotion on phoneme duration is presented.
  • Versteegh, M., Ten Bosch, L., & Boves, L. (2010). Dealing with uncertain input in word learning. In Proceedings of the IXth IEEE International Conference on Development and Learning (ICDL). Ann Arbor, MI, 18-21 Aug. 2010 (pp. 46-51). IEEE.

    Abstract

    In this paper we investigate a computational model of word learning that is embedded in a cognitively and ecologically plausible framework. Multi-modal stimuli from four different speakers form a varied source of experience. The model incorporates active learning, attention to a communicative setting and clarity of the visual scene. The model's ability to learn associations between speech utterances and visual concepts is evaluated during training to investigate the influence of active learning under conditions of uncertain input. The results show the importance of shared attention in word learning and the model's robustness against noise.
  • Versteegh, M., Ten Bosch, L., & Boves, L. (2011). Modelling novelty preference in word learning. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 761-764).

    Abstract

    This paper investigates the effects of novel words on a cognitively plausible computational model of word learning. The model is first familiarized with a set of words, achieving high recognition scores and subsequently offered novel words for training. We show that the model is able to recognize the novel words as different from the previously seen words, based on a measure of novelty that we introduce. We then propose a procedure analogous to novelty preference in infants. Results from simulations of word learning show that adding this procedure to our model speeds up training and helps the model attain higher recognition rates.
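
    The novelty measure itself is not spelled out in this abstract. A minimal sketch of the general idea — scoring an incoming token by its distance to the nearest stored representation and giving sufficiently novel tokens extra training weight — might look as follows (the function names, feature vectors, and threshold are illustrative assumptions, not the model's actual internals):

```python
# Illustrative sketch of a novelty-preference mechanism (assumed details,
# not the paper's actual model): a token's novelty is its distance to the
# nearest stored representation; novel tokens are weighted more heavily
# during training.

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def novelty(token, memory):
    """Distance from a token to its nearest neighbour in memory."""
    return min(euclidean(token, m) for m in memory)

def training_weight(token, memory, threshold=0.5, boost=2.0):
    """Boost the learning weight of tokens exceeding the novelty threshold."""
    return boost if novelty(token, memory) > threshold else 1.0

# Two familiar representations; the second query token is novel relative
# to both of them, so it receives the boosted weight.
memory = [[0.1, 0.2], [0.9, 0.8]]
weights = [training_weight(t, memory) for t in ([0.1, 0.2], [0.0, 0.9])]
```

    Under this sketch, familiar tokens train with the baseline weight while novel ones are emphasized, which is one simple way to realize the "novelty preference" described above.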
  • Versteegh, M., Sangati, F., & Zuidema, W. (2010). Simulations of socio-linguistic change: Implications for unidirectionality. In A. Smith, M. Schoustra, B. Boer, & K. Smith (Eds.), Proceedings of the 8th International conference on the Evolution of Language (EVOLANG 8) (pp. 511-512). World Scientific Publishing.
  • Verweij, H., Windhouwer, M., & Wittenburg, P. (2011). Knowledge management for small languages. In V. Luzar-Stiffler, I. Jarec, & Z. Bekic (Eds.), Proceedings of the ITI 2011 33rd Int. Conf. on Information Technology Interfaces, June 27-30, 2011, Cavtat, Croatia (pp. 213-218). Zagreb, Croatia: University Computing Centre, University of Zagreb.

    Abstract

    In this paper an overview of the knowledge components needed for extensive documentation of small languages is given. The Language Archive is striving to offer all these tools to the linguistic community. The major tools in relation to the knowledge components are described, followed by a discussion of what is currently lacking and possible strategies to move forward.
  • Virpioja, S., Lehtonen, M., Hulten, A., Salmelin, R., & Lagus, K. (2011). Predicting reaction times in word recognition by unsupervised learning of morphology. In T. Honkela, W. Duch, M. Girolami, & S. Kaski (Eds.), Artificial Neural Networks and Machine Learning – ICANN 2011 (pp. 275-282). Berlin: Springer.

    Abstract

    A central question in the study of the mental lexicon is how morphologically complex words are processed. We consider this question from the viewpoint of statistical models of morphology. As an indicator of the mental processing cost in the brain, we use reaction times to words in a visual lexical decision task on Finnish nouns. Statistical correlation between a model and reaction times is employed as a goodness measure of the model. In particular, we study Morfessor, an unsupervised method for learning concatenative morphology. The results for a set of inflected and monomorphemic Finnish nouns reveal that the probabilities given by Morfessor, especially the Categories-MAP version, show considerably higher correlations to the reaction times than simple word statistics such as frequency, morphological family size, or length. These correlations are also higher than when any individual test subject is viewed as a model.
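
    The evaluation logic described above — using the statistical correlation between per-word model costs and reaction times as a goodness measure — can be sketched roughly as follows (the cost and reaction-time values here are invented for illustration; in the study the probabilities come from Morfessor and the reaction times from the lexical decision task):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-word costs (negative log probabilities from a morphology
# model) and observed lexical decision times in milliseconds.
costs = [5.2, 8.1, 12.4, 6.3, 15.0, 9.8]
rts = [520, 610, 700, 545, 760, 640]
goodness = pearson(costs, rts)  # higher correlation = better model fit
```

    The same correlation can then be computed for baseline predictors such as log frequency or word length, so competing models are compared on a single scale.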
  • Von Stutterheim, C., Carroll, M., & Klein, W. (2003). Two ways of construing complex temporal structures. In F. Lenz (Ed.), Deictic conceptualization of space, time and person (pp. 97-133). Amsterdam: Benjamins.
  • Vonk, W., & Cozijn, R. (2003). On the treatment of saccades and regressions in eye movement measures of reading time. In J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind's eye: Cognitive and applied aspects of eye movement research (pp. 291-312). Amsterdam: Elsevier.
  • Vuong, L., Meyer, A. S., & Christiansen, M. H. (2011). Simultaneous online tracking of adjacent and non-adjacent dependencies in statistical learning. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 964-969). Austin, TX: Cognitive Science Society.
  • Wagner, A., & Braun, A. (2003). Is voice quality language-dependent? Acoustic analyses based on speakers of three different languages. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 651-654). Adelaide: Causal Productions.
  • Wagner, M., Tran, D., Togneri, R., Rose, P., Powers, D., Onslow, M., Loakes, D., Lewis, T., Kuratate, T., Kinoshita, Y., Kemp, N., Ishihara, S., Ingram, J., Hajek, J., Grayden, D., Göcke, R., Fletcher, J., Estival, D., Epps, J., Dale, R., Cutler, A., Cox, F., Chetty, G., Cassidy, S., Butcher, A., Burnham, D., Bird, S., Best, C., Bennamoun, M., Arciuli, J., & Ambikairajah, E. (2011). The Big Australian Speech Corpus (The Big ASC). In M. Tabain, J. Fletcher, D. Grayden, J. Hajek, & A. Butcher (Eds.), Proceedings of the Thirteenth Australasian International Conference on Speech Science and Technology (pp. 166-170). Melbourne: ASSTA.
  • Warner, N. (2003). Rapid perceptibility as a factor underlying universals of vowel inventories. In A. Carnie, H. Harley, & M. Willie (Eds.), Formal approaches to function in grammar, in honor of Eloise Jelinek (pp. 245-261). Amsterdam: Benjamins.
  • Weber, A., Crocker, M., & Knoeferle, P. (2010). Conflicting constraints in resource-adaptive language comprehension. In M. W. Crocker, & J. Siekmann (Eds.), Resource-adaptive cognitive processes (pp. 119-141). New York: Springer.

    Abstract

    The primary goal of psycholinguistic research is to understand the architectures and mechanisms that underlie human language comprehension and production. This entails an understanding of how linguistic knowledge is represented and organized in the brain and a theory of how that knowledge is accessed when we use language. Research has traditionally emphasized purely linguistic aspects of on-line comprehension, such as the influence of lexical, syntactic, semantic and discourse constraints, and their time-course. It has become increasingly clear, however, that nonlinguistic information, such as the visual environment, is also actively exploited by situated language comprehenders.
  • Weber, A., & Smits, R. (2003). Consonant and vowel confusion patterns by American English listeners. In M. J. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 1437-1440). Adelaide: Causal Productions.

    Abstract

    This study investigated the perception of American English phonemes by native listeners. Listeners identified either the consonant or the vowel in all possible English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0 dB, 8 dB, and 16 dB). Effects of syllable position, signal-to-noise ratio, and articulatory features on vowel and consonant identification are discussed. The results constitute the largest source of data that is currently available on phoneme confusion patterns of American English phonemes by native listeners.
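
    A confusion pattern of this kind is, at its core, a tally of presented-versus-identified phonemes per listening condition. A minimal sketch (with invented responses — the study's actual data set is far larger) might be:

```python
from collections import defaultdict

# Tally identification responses into a phoneme confusion matrix, keyed by
# signal-to-noise ratio (dB). The response tuples are invented examples.
responses = [
    ("p", "p", 16), ("p", "p", 16), ("p", "b", 16),  # (presented, identified, SNR)
    ("p", "p", 0), ("p", "t", 0), ("p", "b", 0),
]

confusions = defaultdict(int)
for presented, identified, snr in responses:
    confusions[(snr, presented, identified)] += 1

def prop_correct(phoneme, snr):
    """Proportion of trials at this SNR where the phoneme was identified correctly."""
    total = sum(n for (s, p, _), n in confusions.items()
                if s == snr and p == phoneme)
    return confusions[(snr, phoneme, phoneme)] / total
```

    Rows of such a matrix (one per presented phoneme) show which competitors each phoneme is misheard as, and how accuracy degrades as the signal-to-noise ratio drops.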
  • Weber, A., & Poellmann, K. (2010). Identifying foreign speakers with an unfamiliar accent or in an unfamiliar language. In New Sounds 2010: Sixth International Symposium on the Acquisition of Second Language Speech (pp. 536-541). Poznan, Poland: Adam Mickiewicz University.
  • Weber, A. (1998). Listening to nonnative language which violates native assimilation rules. In D. Duez (Ed.), Proceedings of the European Scientific Communication Association workshop: Sound patterns of Spontaneous Speech (pp. 101-104).

    Abstract

    Recent studies using phoneme detection tasks have shown that spoken-language processing is neither facilitated nor interfered with by optional assimilation, but is inhibited by violation of obligatory assimilation. Interpretation of these results depends on an assessment of their generality, specifically, whether they also obtain when listeners are processing nonnative language. Two separate experiments are presented in which native listeners of German and native listeners of Dutch had to detect a target fricative in legal monosyllabic Dutch nonwords. All of the nonwords were correct realisations in standard Dutch. For German listeners, however, half of the nonwords contained phoneme strings which violate the German fricative assimilation rule. Whereas the Dutch listeners showed no significant effects, German listeners detected the target fricative faster when the German fricative assimilation was violated than when no violation occurred. The results might suggest that violation of assimilation rules does not have to make processing more difficult per se.
  • Wegener, C. (2011). Expression of reciprocity in Savosavo. In N. Evans, A. Gaby, S. C. Levinson, & A. Majid (Eds.), Reciprocals and semantic typology (pp. 213-224). Amsterdam: Benjamins.

    Abstract

    This paper describes how reciprocity is expressed in the Papuan (i.e. non-Austronesian) language Savosavo, spoken in the Solomon Islands. The main strategy is to use the reciprocal nominal mapamapa, which can occur in different NP positions and always triggers default third person singular masculine agreement, regardless of the number and gender of the referents. After a description of this as well as another strategy that is occasionally used (the ‘joint activity construction’), the paper will provide a detailed analysis of data elicited with a set of video stimuli and show that the main strategy is used to describe even clearly asymmetric situations, as long as more than one person acts on more than one person in a joint activity.
  • Wender, K. F., Haun, D. B. M., Rasch, B. H., & Blümke, M. (2003). Context effects in memory for routes. In C. Freksa, W. Brauer, C. Habel, & K. F. Wender (Eds.), Spatial cognition III: Routes and navigation, human memory and learning, spatial representation and spatial learning (pp. 209-231). Berlin: Springer.
  • Wilkin, K., & Holler, J. (2011). Speakers’ use of ‘action’ and ‘entity’ gestures with definite and indefinite references. In G. Stam, & M. Ishino (Eds.), Integrating gestures: The interdisciplinary nature of gesture (pp. 293-308). Amsterdam: John Benjamins.

    Abstract

    Common ground is an essential prerequisite for coordination in social interaction, including language use. When referring back to a referent in discourse, this referent is ‘given information’ and therefore in the interactants’ common ground. When a referent is being referred to for the first time, a speaker introduces ‘new information’. The analyses reported here are on gestures that accompany such references when they include definite and indefinite grammatical determiners. The main finding from these analyses is that referents referred to by definite and indefinite articles were equally often accompanied by gesture, but speakers tended to accompany definite references with gestures focusing on action information and indefinite references with gestures focusing on entity information. The findings suggest that speakers use speech and gesture together to design utterances appropriate for speakers with whom they share common ground.

  • Willems, R. M., Labruna, L., D'Esposito, M., Ivry, R., & Casasanto, D. (2010). A functional role for the motor system in language understanding: Evidence from rTMS [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (p. 127). York: University of York.
  • Willems, R. M., & Hagoort, P. (2010). Cortical motor contributions to language understanding. In L. Hermer (Ed.), Reciprocal interactions among early sensory and motor areas and higher cognitive networks (pp. 51-72). Kerala, India: Research Signpost Press.

    Abstract

    Here we review evidence from cognitive neuroscience for a tight relation between language and action in the brain. We focus on two types of relation between language and action. First, we investigate whether the perception of speech and speech sounds leads to activation of parts of the cortical motor system also involved in speech production. Second, we evaluate whether understanding action-related language involves the activation of parts of the motor system. We conclude that whereas there is considerable evidence that understanding language can involve parts of our motor cortex, this relation is best thought of as inherently flexible. As we explain, the exact nature of the input as well as the intention with which language is perceived influences whether and how motor cortex plays a role in language processing.
  • Wilson, J. J., & Little, H. (2016). A Neo-Peircean framework for experimental semiotics. In Proceedings of the 2nd Conference of the International Association for Cognitive Semiotics (pp. 171-173).
  • Windhouwer, M., Petro, J., Newskaya, I., Drude, S., Aristar-Dry, H., & Gippert, J. (2013). Creating a serialization of LMF: The experience of the RELISH project. In G. Francopoulo (Ed.), LMF - Lexical Markup Framework (pp. 215-226). London: Wiley.
  • Windhouwer, M., Kemps-Snijders, M., Trilsbeek, P., Moreira, A., Van der Veen, B., Silva, G., & Von Rhein, D. (2016). FLAT: Constructing a CLARIN Compatible Home for Language Resources. In K. Choukri, T. Declerck, S. Goggi, M. Grobelnik, B. Maegaard, J. Mariani, H. Mazo, & A. Moreno (Eds.), Proceedings of LREC 2016: 10th International Conference on Language Resources and Evaluation (pp. 2478-2483). Paris: European Language Resources Association (ELRA).

    Abstract

    Language resources are valuable assets, both for institutions and researchers. To safeguard these resources requirements for repository systems and data management have been specified by various branch organizations, e.g., CLARIN and the Data Seal of Approval. This paper describes these and some additional ones posed by the authors’ home institutions. And it shows how they are met by FLAT, to provide a new home for language resources. The basis of FLAT is formed by the Fedora Commons repository system. This repository system can meet many of the requirements out-of-the box, but still additional configuration and some development work is needed to meet the remaining ones, e.g., to add support for Handles and Component Metadata. This paper describes design decisions taken in the construction of FLAT’s system architecture via a mix-and-match strategy, with a preference for the reuse of existing solutions. FLAT is developed and used by the a Institute and The Language Archive, but is also freely available for anyone in need of a CLARIN-compliant repository for their language resources.
  • Windhouwer, M., & Wright, S. E. (2013). LMF and the Data Category Registry: Principles and application. In G. Francopoulo (Ed.), LMF: Lexical Markup Framework (pp. 41-50). London: Wiley.
  • Wittek, A. (1998). Learning verb meaning via adverbial modification: Change-of-state verbs in German and the adverb "wieder" again. In A. Greenhill, M. Hughes, H. Littlefield, & H. Walsh (Eds.), Proceedings of the 22nd Annual Boston University Conference on Language Development (pp. 779-790). Somerville, MA: Cascadilla Press.
  • Witteman, M. J., Bardhan, N. P., Weber, A., & McQueen, J. M. (2011). Adapting to foreign-accented speech: The role of delay in testing. Journal of the Acoustical Society of America. Program abstracts of the 162nd Meeting of the Acoustical Society of America, 130(4), 2443.

    Abstract

    Understanding speech usually seems easy, but it can become noticeably harder when the speaker has a foreign accent. This is because foreign accents add considerable variation to speech. Research on foreign-accented speech shows that participants are able to adapt quickly to this type of variation. Less is known, however, about longer-term maintenance of adaptation. The current study focused on long-term adaptation by exposing native listeners to foreign-accented speech on Day 1, and testing them on comprehension of the accent one day later. Comprehension was thus not tested immediately, but only after a 24 hour period. On Day 1, native Dutch listeners listened to the speech of a Hebrew learner of Dutch while performing a phoneme monitoring task that did not depend on the talker’s accent. In particular, shortening of the long vowel /i/ into /ɪ/ (e.g., lief [li:f], ‘sweet’, pronounced as [lɪf]) was examined. These mispronunciations did not create lexical ambiguities in Dutch. On Day 2, listeners participated in a cross-modal priming task to test their comprehension of the accent. The results will be contrasted with results from an experiment without delayed testing and related to accounts of how listeners maintain adaptation to foreign-accented speech.
  • Witteman, M. J., Weber, A., & McQueen, J. M. (2011). On the relationship between perceived accentedness, acoustic similarity, and processing difficulty in foreign-accented speech. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 2229-2232).

    Abstract

    Foreign-accented speech is often perceived as more difficult to understand than native speech. What causes this potential difficulty, however, remains unknown. In the present study, we compared acoustic similarity and accent ratings of American-accented Dutch with a cross-modal priming task designed to measure online speech processing. We focused on two Dutch diphthongs: ui and ij. Though both diphthongs deviated from standard Dutch to varying degrees and perceptually varied in accent strength, native Dutch listeners recognized words containing the diphthongs easily. Thus, not all foreign-accented speech hinders comprehension, and acoustic similarity and perceived accentedness are not always predictive of processing difficulties.
  • Witteman, M. J., Weber, A., & McQueen, J. M. (2010). Rapid and long-lasting adaptation to foreign-accented speech [Abstract]. Journal of the Acoustical Society of America, 128, 2486.

    Abstract

    In foreign-accented speech, listeners have to handle noticeable deviations from the standard pronunciation of a target language. Three cross-modal priming experiments investigated how short- and long-term experiences with a foreign accent influence word recognition by native listeners. In experiment 1, German-accented words were presented to Dutch listeners who had either extensive or limited prior experience with German-accented Dutch. Accented words either contained a diphthong substitution that deviated acoustically quite largely from the canonical form (huis [hys], "house", pronounced as [hoys]), or that deviated acoustically to a lesser extent (lijst [lst], "list", pronounced as [lst]). The mispronunciations never created lexical ambiguity in Dutch. While long-term experience facilitated word recognition for both types of substitutions, limited experience facilitated recognition only of words with acoustically smaller deviations. In experiment 2, Dutch listeners with limited experience listened to the German speaker for 4 min before participating in the cross-modal priming experiment. The results showed that speaker-specific learning effects for acoustically large deviations can be obtained already after a brief exposure, as long as the exposure contains evidence of the deviations. Experiment 3 investigates whether these short-term adaptation effects for foreign-accented speech are speaker-independent.
  • Wittenburg, P. (2010). Culture change in data management. In V. Luzar-Stiffler, I. Jarec, & Z. Bekic (Eds.), Proceedings of the ITI 2010, 32nd International Conference on Information Technology Interfaces (pp. 43-48). Zagreb, Croatia: University of Zagreb.

    Abstract

    In the emerging e-Science scenario users should be able to easily combine data resources and tools/services, and machines should automatically be able to trace paths and carry out interpretations. Users who want to participate need to move from a download-first to a cyberinfrastructure paradigm, thus increasing their dependency on the seamless operation of all components in the Internet. Such a scenario is inherently complex and requires compliance with guidelines and standards to keep it working smoothly. Only a change in our culture of dealing with research data and awareness of the way we do data lifecycle management will lead to success. Since we have so many legacy resources that are not compliant with the required guidelines, since we need to admit obvious problems in particular with standardization in the area of semantics, and since it will take much time to establish trust on the side of researchers, the e-Science scenario can only be achieved stepwise, which will take much time.
  • Wittenburg, P., & Trilsbeek, P. (2010). Digital archiving - a necessity in documentary linguistics. In G. Senft (Ed.), Endangered Austronesian and Australian Aboriginal languages: Essays on language documentation, archiving and revitalization (pp. 111-136). Canberra: Pacific Linguistics.
  • Wittenburg, P., & Ringersma, J. (2013). Metadata description for lexicons. In R. H. Gouws, U. Heid, W. Schweickard, & H. E. Wiegand (Eds.), Dictionaries: An international encyclopedia of lexicography: Supplementary volume: Recent developments with focus on electronic and computational lexicography (pp. 1329-1335). Berlin: Mouton de Gruyter.
  • Wittenburg, P., Trilsbeek, P., & Lenkiewicz, P. (2010). Large multimedia archive for world languages. In SSCS'10 - Proceedings of the 2010 ACM Workshop on Searching Spontaneous Conversational Speech, Co-located with ACM Multimedia 2010 (pp. 53-56). New York: Association for Computing Machinery, Inc. (ACM). doi:10.1145/1878101.1878113.

    Abstract

    In this paper, we describe the core pillars of a large archive of language material recorded worldwide, partly about languages that are highly endangered. The bases for the documentation of these languages are audio/video recordings which are then annotated at several linguistic layers. The digital age completely changed the requirements of long-term preservation and it is discussed how the archive met these new challenges. An extensive solution for data replication has been worked out to guarantee bit-stream preservation. Due to an immediate conversion of the incoming data to standards-based formats and checks at upload time, lifecycle management of all 50 terabytes of data is greatly simplified. A suitable metadata framework, allowing users not only to describe and discover resources but also to organize them, enables very efficient management of this amount of resources. Finally, it is the Language Archiving Technology software suite which allows users to create, manipulate, access and enrich all archived resources, given that they have access permissions.
  • Wittenburg, P., Bel, N., Borin, L., Budin, G., Calzolari, N., Hajicova, E., Koskenniemi, K., Lemnitzer, L., Maegaard, B., Piasecki, M., Pierrel, J.-M., Piperidis, S., Skadina, I., Tufis, D., Van Veenendaal, R., Váradi, T., & Wynne, M. (2010). Resource and service centres as the backbone for a sustainable service infrastructure. In N. Calzolari, B. Maegaard, J. Mariani, J. Odijk, K. Choukri, S. Piperidis, M. Rosner, & D. Tapias (Eds.), Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10) (pp. 60-63). European Language Resources Association (ELRA).

    Abstract

    Currently, research infrastructures are being designed and established in many disciplines, since they all suffer from an enormous fragmentation of their resources and tools. In the domain of language resources and tools, the CLARIN initiative has been funded since 2008 to overcome many of the integration and interoperability hurdles. CLARIN can build on knowledge and work from many projects that were carried out during the last years and wants to build stable and robust services that can be used by researchers. Here, service centres that have the potential of being persistent and that adhere to criteria established by CLARIN will play an important role. In the last year of the so-called preparatory phase, these centres are currently developing four use cases that can demonstrate how the various pillars CLARIN has been working on can be integrated. All four use cases fulfil the criterion of being cross-national.
  • Wnuk, E. (2016). Specificity at the basic level in event taxonomies: The case of Maniq verbs of ingestion. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 2687-2692). Austin, TX: Cognitive Science Society.

    Abstract

    Previous research on basic-level object categories shows there is cross-cultural variation in basic-level concepts, arguing against the idea that the basic level reflects an objective reality. In this paper, I extend the investigation to the domain of events. More specifically, I present a case study of verbs of ingestion in Maniq illustrating a highly specific categorization of ingestion events at the basic level. A detailed analysis of these verbs reveals they tap into culturally salient notions. Yet, cultural salience alone cannot explain the specificity of basic-level verbs, since ingestion is a domain of universal human experience. Further analysis reveals, however, that another key factor is the language itself. Maniq’s preference for encoding specific meaning in basic-level verbs is not a peculiarity of one domain, but a recurrent characteristic of its verb lexicon, pointing to the significant role of the language system in the structure of event concepts.
  • Wright, S. E., Windhouwer, M., Schuurman, I., & Kemps-Snijders, M. (2013). Community efforts around the ISOcat Data Category Registry. In I. Gurevych, & J. Kim (Eds.), The People's Web meets NLP: Collaboratively constructed language resources (pp. 349-374). New York: Springer.

    Abstract

    The ISOcat Data Category Registry provides a community computing environment for creating, storing, retrieving, harmonizing and standardizing data category specifications (DCs), used to register linguistic terms used in various fields. This chapter recounts the history of DC documentation in TC 37, beginning from paper-based lists created for lexicographers and terminologists and progressing to the development of a web-based resource for a much broader range of users. While describing the considerable strides that have been made to collect a very large comprehensive collection of DCs, it also outlines difficulties that have arisen in developing a fully operative web-based computing environment for achieving consensus on data category names, definitions, and selections and describes efforts to overcome some of the present shortcomings and to establish positive working procedures designed to engage a wide range of people involved in the creation of language resources.
  • Zeshan, U., & Panda, S. (2011). Reciprocal constructions in Indo-Pakistani sign language. In N. Evans, & A. Gaby (Eds.), Reciprocals and semantic typology (pp. 91-113). Amsterdam: Benjamins.

    Abstract

    Indo-Pakistani Sign Language (IPSL) is the sign language used by deaf communities in a large region across India and Pakistan. This visual-gestural language has a dedicated construction for specifically expressing reciprocal relationships, which can be applied to agreement verbs and to auxiliaries. The reciprocal construction relies on a change in the movement pattern of the signs it applies to. In addition, IPSL has a number of other strategies which can have a reciprocal interpretation, and the IPSL lexicon includes a good number of inherently reciprocal signs. All reciprocal expressions can be modified in complex ways that rely on the grammatical use of the sign space. Considering grammaticalisation and lexicalisation processes linking some of these constructions is also important for a better understanding of reciprocity in IPSL.
  • Zhang, Y., & Yu, C. (2016). Examining referential uncertainty in naturalistic contexts from the child’s view: Evidence from an eye-tracking study with infants. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 2027-2032). Austin, TX: Cognitive Science Society.

    Abstract

    Young infants are prolific word learners even though they face the challenge of referential uncertainty (Quine, 1960). Many laboratory studies have shown that infants are skilled at inferring correct referents of words from ambiguous contexts (Swingley, 2009). However, little is known regarding how they visually attend to and select the target object among many other objects in view when parents name it during everyday interactions. By investigating the looking patterns of 12-month-old infants using naturalistic first-person images with varying degrees of referential ambiguity, we found that infants’ attention is selective and that they attend to only a small subset of objects at each learning instance, despite the complexity of the data in the real world. This work allows us to better understand how perceptual properties of objects in infants’ view influence their visual attention, which is also related to how they select candidate objects to build word-object mappings.
  • Zinn, C., Wittenburg, P., & Ringersma, J. (2010). An evolving eScience environment for research data in linguistics. In N. Calzolari, B. Maegaard, J. Mariani, J. Odijk, K. Choukri, S. Piperidis, M. Rosner, & D. Tapias (Eds.), Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10) (pp. 894-899). European Language Resources Association (ELRA).

    Abstract

    The amount of research data in the Humanities is increasing at fast speed. Metadata helps describing and making accessible this data to interested researchers within and across institutions. While metadata interoperability is an issue that is being recognised and addressed, the systematic and user-driven provision of annotations and the linking together of resources into new organisational layers have received much less attention. This paper gives an overview of our evolving technological eScience environment to support such functionality. It describes two tools, ADDIT and ViCoS, which enable researchers, rather than archive managers, to organise and reorganise research data to fit their particular needs. The two tools, which are embedded into our institute's existing software landscape, are an initial step towards an eScience environment that gives our scientists easy access to (multimodal) research data of their interest, and empowers them to structure, enrich, link together, and share such data as they wish.
  • Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2013). Expression of multiple entities in Turkish Sign Language (TİD). In E. Arik (Ed.), Current Directions in Turkish Sign Language Research (pp. 272-302). Newcastle upon Tyne: Cambridge Scholars Publishing.

    Abstract

    This paper reports on an exploration of the ways in which multiple entities are expressed in Turkish Sign Language (TİD). The (descriptive and quantitative) analyses provided are based on a corpus of both spontaneous data and specifically elicited data, in order to provide as comprehensive an account as possible. We have found several devices in TİD for expression of multiple entities, in particular localization, spatial plural predicate inflection, and a specific form used to express multiple entities that are side by side in the same configuration (not reported for any other sign language to date), as well as numerals and quantifiers. In contrast to some other signed languages, TİD does not appear to have a productive system of plural reduplication. We argue that none of the devices encountered in the TİD data is a genuine plural marking device and that the plural interpretation of multiple entity localizations and plural predicate inflections is a by-product of the use of space to indicate the existence or the involvement in an event of multiple entities.
  • Zwitserlood, I. (2003). Word formation below and above little x: Evidence from Sign Language of the Netherlands. In Proceedings of SCL 19. Nordlyd Tromsø University Working Papers on Language and Linguistics (pp. 488-502).

    Abstract

    Although in many respects sign languages have a similar structure to that of spoken languages, the different modalities in which both types of languages are expressed cause differences in structure as well. One of the most striking differences between spoken and sign languages is the influence of the interface between grammar and PF on the surface form of utterances. Spoken language words and phrases are in general characterized by sequential strings of sounds, morphemes and words, while in sign languages we find that many phonemes, morphemes, and even words are expressed simultaneously. A linguistic model should be able to account for the structures that occur in both spoken and sign languages. In this paper, I will discuss the morphological/ morphosyntactic structure of signs in Nederlandse Gebarentaal (Sign Language of the Netherlands, henceforth NGT), with special focus on the components ‘place of articulation’ and ‘handshape’. I will focus on their multiple functions in the grammar of NGT and argue that the framework of Distributed Morphology (DM), which accounts for word formation in spoken languages, is also suited to account for the formation of structures in sign languages. First I will introduce the phonological and morphological structure of NGT signs. Then, I will briefly outline the major characteristics of the DM framework. Finally, I will account for signs that have the same surface form but have a different morphological structure by means of that framework.