Publications

  • Robotham, L., Trinkler, I., & Sauter, D. (2008). The power of positives: Evidence for an overall emotional recognition deficit in Huntington's disease [Abstract]. Journal of Neurology, Neurosurgery & Psychiatry, 79, A12.

    Abstract

    The recognition of the emotions of disgust, anger and fear has been shown to be significantly impaired in Huntington’s disease (e.g., Sprengelmeyer et al., 1997, 2006; Gray et al., 1997; Milders et al., 2003; Montagne et al., 2006; Johnson et al., 2007; De Gelder et al., 2008). The relative impairment of these emotions might have implied a recognition impairment specific to negative emotions. Could the asymmetric recognition deficits reflect not the complexity of the emotion but rather the complexity of the task? In the current study, 15 Huntington’s patients and 16 control subjects were presented with negative and positive non-speech emotional vocalisations that were to be identified as anger, fear, sadness, disgust, achievement, pleasure and amusement in a forced-choice paradigm. This experiment matched the negative emotions with positive emotions more accurately, within a homogeneous modality. The resulting dually impaired ability of Huntington’s patients to identify negative and positive non-speech emotional vocalisations correctly provides evidence for an overall emotional recognition deficit in the disease. These results indicate that previous findings of a specificity in emotional recognition deficits might instead be due to the limitations of the visual modality. Previous experiments may have found an effect of emotional specificity due to the presence of a single positive emotion, happiness, in the midst of multiple negative emotions. In contrast with the previous literature, the study presented here points to a global deficit in the recognition of emotional sounds.
  • Roelofs, A. (2003). Modeling the relation between the production and recognition of spoken word forms. In N. O. Schiller, & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 115-158). Berlin: Mouton de Gruyter.
  • Rossi, G. (2014). When do people not use language to make requests? In P. Drew, & E. Couper-Kuhlen (Eds.), Requesting in social interaction (pp. 301-332). Amsterdam: John Benjamins.

    Abstract

    In everyday joint activities (e.g. playing cards, preparing potatoes, collecting empty plates), participants often request others to pass, move or otherwise deploy objects. In order to get these objects to or from the requestee, requesters need to manipulate them, for example by holding them out, reaching for them, or placing them somewhere. As they perform these manual actions, requesters may or may not accompany them with language (e.g. Take this potato and cut it or Pass me your plate). This study shows that adding or omitting language in the design of a request is influenced in the first place by a criterion of recognition. When the requested action is projectable from the advancement of an activity, presenting a relevant object to the requestee is enough for them to understand what to do; when, on the other hand, the requested action is occasioned by a contingent development of the activity, requesters use language to specify what the requestee should do. This criterion operates alongside a perceptual criterion, to do with the affordances of the visual and auditory modality. When the requested action is projectable but the requestee is not visually attending to the requester’s manual behaviour, the requester can use just enough language to attract the requestee’s attention and secure immediate recipiency. This study contributes to a line of research concerned with the organisation of verbal and nonverbal resources for requesting. Focussing on situations in which language is not – or only minimally – used, it demonstrates the role played by visible bodily behaviour and by the structure of everyday activities in the formation and understanding of requests.
  • Rowland, C. F., Noble, C. H., & Chan, A. (2014). Competition all the way down: How children learn word order cues to sentence meaning. In B. MacWhinney, A. Malchukov, & E. Moravcsik (Eds.), Competing Motivations in Grammar and Usage (pp. 125-143). Oxford: Oxford University Press.

    Abstract

    Most work on competing cues in language acquisition has focussed on what happens when cues compete within a certain construction. There has been far less work on what happens when constructions themselves compete. The aim of the present chapter was to explore how the acquisition mechanism copes when constructions compete in a language. We present three experimental studies, all of which focus on the acquisition of the syntactic function of word order as a marker of the Theme-Recipient relation in ditransitives (form-meaning mapping). In Study 1 we investigated how quickly English children acquire form-meaning mappings when there are two competing structures in the language. We demonstrated that English-speaking 4-year-olds, but not 3-year-olds, correctly interpreted both prepositional and double object datives, assigning Theme and Recipient participant roles on the basis of word order cues. There was no advantage for the double object dative despite its greater frequency in child-directed speech. In Study 2 we looked at acquisition in a language which has no dative alternation, Welsh, to investigate how quickly children acquire form-meaning mapping when there is no competing structure. We demonstrated that Welsh children acquired the prepositional dative at age 3 years, much earlier than English children. Finally, in Study 3 we examined bei2 (give) ditransitives in Cantonese, to investigate what happens when there is no dative alternation (as in Welsh), but the child hears alternative, and possibly competing, word orders in the input. Like the English 3-year-olds, the Cantonese 3-year-olds had not yet acquired the word order marking constraints of bei2 ditransitives. We conclude that there is not only competition between cues but competition between constructions in language acquisition. We suggest an extension to the competition model (Bates & MacWhinney, 1982) whereby generalisations take place across constructions as easily as they take place within constructions, whenever there are salient similarities to form the basis of the generalisation.
  • Rubio-Fernández, P., Breheny, R., & Lee, M. W. (2003). Context-independent information in concepts: An investigation of the notion of ‘core features’. In Proceedings of the 25th Annual Conference of the Cognitive Science Society (CogSci 2003). Austin, TX: Cognitive Science Society.
  • De Ruiter, J. P. (2003). The function of hand gesture in spoken conversation. In M. Bickenbach, A. Klappert, & H. Pompe (Eds.), Manus Loquens: Medium der Geste, Gesten der Medien (pp. 338-347). Cologne: DuMont.
  • De Ruiter, J. P. (2003). A quantitative model of Störung. In A. Kümmel, & E. Schüttpelz (Eds.), Signale der Störung (pp. 67-81). München: Wilhelm Fink Verlag.
  • De Ruiter, L. E. (2008). How useful are polynomials for analyzing intonation? In Proceedings of Interspeech 2008 (pp. 785-789).

    Abstract

    This paper presents the first application to German data of polynomial modeling as a means for validating phonological pitch accent labels. It is compared to traditional phonetic analysis (measuring minima, maxima, and alignment). The traditional method fares better in classification, but results are comparable in statistical accent pair testing. Robustness tests show that pitch correction is necessary in both cases. The approaches are discussed in terms of their practicability, their applicability to other domains of research, and the interpretability of their results.
  • Sauter, D., Eisner, F., Rosen, S., & Scott, S. K. (2008). The role of source and filter cues in emotion recognition in speech [Abstract]. Journal of the Acoustical Society of America, 123, 3739-3740.

    Abstract

    In the context of the source-filter theory of speech, it is well established that intelligibility is heavily reliant on information carried by the filter, that is, spectral cues (e.g., Faulkner et al., 2001; Shannon et al., 1995). However, the extraction of other types of information in the speech signal, such as emotion and identity, is less well understood. In this study we investigated the extent to which emotion recognition in speech depends on filter-dependent cues, using a forced-choice emotion identification task at ten levels of noise-vocoding ranging between one and 32 channels. In addition, participants performed a speech intelligibility task with the same stimuli. Our results indicate that, compared to speech intelligibility, emotion recognition relies less on spectral information and more on cues typically signaled by source variations, such as voice pitch, voice quality, and intensity. We suggest that, while the reliance on spectral dynamics is likely a unique aspect of human speech, greater phylogenetic continuity across species may be found in the communication of affect in vocalizations.
  • Sauter, D. (2008). The time-course of emotional voice processing [Abstract]. Neurocase, 14, 455-455.

    Abstract

    Research using event-related brain potentials (ERPs) has demonstrated an early differential effect in fronto-central regions when processing emotional, as compared to affectively neutral, facial stimuli (e.g., Eimer & Holmes, 2002). In this talk, data demonstrating a similar effect in the auditory domain will be presented. ERPs were recorded in a one-back task in which participants had to identify immediate repetitions of emotion category, such as a fearful sound followed by another fearful sound. The stimulus set consisted of non-verbal emotional vocalisations communicating positive and negative states, as well as neutral baseline conditions. As in the facial domain, fear sounds, as compared to acoustically controlled neutral sounds, elicited a frontally distributed positivity with an onset latency of about 150 ms after stimulus onset. These data suggest the existence of a rapid multi-modal fronto-central mechanism discriminating emotional from non-emotional human signals.
  • Scharenborg, O., & Cooke, M. P. (2008). Comparing human and machine recognition performance on a VCV corpus. In ISCA Tutorial and Research Workshop (ITRW) on "Speech Analysis and Processing for Knowledge Discovery".

    Abstract

    Listeners outperform ASR systems in every speech recognition task. However, it is not clear where this human advantage originates. This paper investigates the role of acoustic feature representations. We test four acoustic representations (MFCCs, PLPs, Mel filterbanks, Rate maps), with and without ‘pitch’ information, using the same backend. The results are compared with listener results at the level of articulatory feature classification. While no acoustic feature representation reached the level of human performance, both MFCCs and Rate maps achieved good scores, with Rate maps nearing human performance on the classification of voicing. Comparing the results on the articulatory features that were most difficult to classify showed similarities between the humans and the SVMs: e.g., ‘dental’ was by far the least well identified by both groups. Overall, adding pitch information seemed to hamper classification performance.
  • Scharenborg, O., McQueen, J. M., Ten Bosch, L., & Norris, D. (2003). Modelling human speech recognition using automatic speech recognition paradigms in SpeM. In Proceedings of Eurospeech 2003 (pp. 2097-2100). Adelaide: Causal Productions.

    Abstract

    We have recently developed a new model of human speech recognition, based on automatic speech recognition techniques [1]. The present paper has two goals. First, we show that the new model performs well in the recognition of lexically ambiguous input. These demonstrations suggest that the model is able to operate in the same optimal way as human listeners. Second, we discuss how to relate the behaviour of a recogniser, designed to discover the optimum path through a word lattice, to data from human listening experiments. We argue that this requires a metric that combines both path-based and word-based measures of recognition performance. The combined metric varies continuously as the input speech signal unfolds over time.
  • Scharenborg, O. (2008). Modelling fine-phonetic detail in a computational model of word recognition. In INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association (pp. 1473-1476). ISCA Archive.

    Abstract

    There is now considerable evidence that fine-grained acoustic-phonetic detail in the speech signal helps listeners to segment a speech signal into syllables and words. In this paper, we compare two computational models of word recognition on their ability to capture and use this fine-phonetic detail during speech recognition. One model, SpeM, is phoneme-based, whereas the other, the newly developed Fine-Tracker, is based on articulatory features. Simulations dealt with modelling the ability of listeners to distinguish short words (e.g., ‘ham’) from the longer words in which they are embedded (e.g., ‘hamster’). The simulations with Fine-Tracker showed that it was, like human listeners, able to distinguish short words from the longer words in which they are embedded. This suggests that it is possible to extract this fine-phonetic detail from the speech signal and use it during word recognition.
  • Scharenborg, O., ten Bosch, L., & Boves, L. (2003). Recognising 'real-life' speech with SpeM: A speech-based computational model of human speech recognition. In Eurospeech 2003 (pp. 2285-2288).

    Abstract

    In this paper, we present a novel computational model of human speech recognition, called SpeM, based on the theory underlying Shortlist. We show that SpeM, in combination with an automatic phone recogniser (APR), is able to simulate the human speech recognition process from the acoustic signal to the ultimate recognition of words. This joint model takes an acoustic speech file as input and calculates the activation flows of candidate words on the basis of the degree of fit of the candidate words with the input. Experiments showed that SpeM outperforms Shortlist on the recognition of ‘real-life’ input. Furthermore, SpeM performs only slightly worse than an off-the-shelf full-blown automatic speech recogniser in which all words are equally probable, while providing a transparent, computationally elegant paradigm for modelling word activations in human word recognition.
  • Schiller, N. O. (2003). Metrical stress in speech production: A time course study. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 451-454). Adelaide: Causal Productions.

    Abstract

    This study investigated the encoding of metrical information during speech production in Dutch. In Experiment 1, participants were asked to judge whether bisyllabic picture names had initial or final stress. Results showed significantly faster decision times for initially stressed targets (e.g., LEpel 'spoon') than for targets with final stress (e.g., liBEL 'dragonfly'; capital letters indicate stressed syllables) and revealed that the monitoring latencies are not a function of the picture naming or object recognition latencies to the same pictures. Experiments 2 and 3 replicated the outcome of the first experiment with bi- and trisyllabic picture names. These results demonstrate that the metrical information of words is encoded rightward incrementally during phonological encoding in speech production. The results of these experiments are in line with Levelt's model of phonological encoding.
  • Schiller, N. O., & Meyer, A. S. (2003). Introduction to the relation between speech comprehension and production. In N. O. Schiller, & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 1-8). Berlin: Mouton de Gruyter.
  • Schmidt, J., Janse, E., & Scharenborg, O. (2014). Age, hearing loss and the perception of affective utterances in conversational speech. In Proceedings of Interspeech 2014: 15th Annual Conference of the International Speech Communication Association (pp. 1929-1933).

    Abstract

    This study investigates whether age and/or hearing loss influence the perception of the emotion dimensions arousal (calm vs. aroused) and valence (positive vs. negative attitude) in conversational speech fragments. Specifically, this study focuses on the relationship between participants' ratings of affective speech and acoustic parameters known to be associated with arousal and valence (mean F0, intensity, and articulation rate). Ten normal-hearing younger and ten older adults with varying hearing loss were tested on two rating tasks. Stimuli consisted of short sentences taken from a corpus of conversational affective speech. In both rating tasks, participants estimated the value of the emotion dimension at hand using a 5-point scale. For arousal, higher intensity was generally associated with higher arousal in both age groups. Compared to younger participants, older participants rated the utterances as less aroused, and showed a smaller effect of intensity on their arousal ratings. For valence, higher mean F0 was associated with more negative ratings in both age groups. Generally, age group differences in rating affective utterances may not relate to age group differences in hearing loss, but rather to other differences between the age groups, as older participants' rating patterns were not associated with their individual hearing loss.
  • Schmidt, T., Duncan, S., Ehmer, O., Hoyt, J., Kipp, M., Loehr, D., Magnusson, M., Rose, T., & Sloetjes, H. (2008). An exchange format for multimodal annotations. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).

    Abstract

    This paper presents the results of a joint effort of a group of multimodality researchers and tool developers to improve the interoperability between several tools used for the annotation of multimodality. We propose a multimodal annotation exchange format, based on the annotation graph formalism, which is supported by import and export routines in the respective tools.
  • Schmiedtová, B. (2003). The use of aspect in Czech L2. In D. Bittner, & N. Gagarina (Eds.), ZAS Papers in Linguistics (pp. 177-194). Berlin: Zentrum für Allgemeine Sprachwissenschaft.
  • Schmiedtová, B. (2003). Aspekt und Tempus im Deutschen und Tschechischen: Eine vergleichende Studie. In S. Höhne (Ed.), Germanistisches Jahrbuch Tschechien - Slowakei: Schwerpunkt Sprachwissenschaft (pp. 185-216). Praha: Lidové noviny.
  • Schmiedtová, B., & Flecken, M. (2008). The role of aspectual distinctions in event encoding: Implications for second language acquisition. In S. Müller-de Knop, & T. Mortelmans (Eds.), Pedagogical grammar (pp. 357-384). Berlin: Mouton de Gruyter.
  • Schoffelen, J.-M., & Gross, J. (2014). Studying dynamic neural interactions with MEG. In S. Supek, & C. J. Aine (Eds.), Magnetoencephalography: From signals to dynamic cortical networks (pp. 405-427). Berlin: Springer.
  • Schreuder, R., Burani, C., & Baayen, R. H. (2003). Parsing and semantic opacity. In E. M. Assink, & D. Sandra (Eds.), Reading complex words (pp. 159-189). Dordrecht: Kluwer.
  • Schuppler, B., Ernestus, M., Scharenborg, O., & Boves, L. (2008). Preparing a corpus of Dutch spontaneous dialogues for automatic phonetic analysis. In INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association (pp. 1638-1641). ISCA Archive.

    Abstract

    This paper presents the steps needed to make a corpus of Dutch spontaneous dialogues accessible for automatic phonetic research aimed at increasing our understanding of reduction phenomena and the role of fine phonetic detail. Since the corpus was not created with automatic processing in mind, it needed to be reshaped. The first part of this paper describes the actions needed for this reshaping in some detail. The second part reports the results of a preliminary analysis of the reduction phenomena in the corpus. For this purpose a phonemic transcription of the corpus was created by means of a forced alignment, first with a lexicon of canonical pronunciations and then with multiple pronunciation variants per word. In this study pronunciation variants were generated by applying a large set of phonetic processes that have been implicated in reduction to the canonical pronunciations of the words. This relatively straightforward procedure allows us to produce plausible pronunciation variants and to verify and extend the results of previous reduction studies reported in the literature.
  • Seidl, A., & Johnson, E. K. (2003). Position and vowel quality effects in infant's segmentation of vowel-initial words. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 2233-2236). Adelaide: Causal Productions.
  • Seifart, F. (2003). Encoding shape: Formal means and semantic distinctions. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 57-59). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877660.

    Abstract

    The basic idea behind this task is to find out how languages encode basic shape distinctions such as dimensionality, axial geometry, relative size, etc. More specifically, we want to find out (i) which formal means are used cross-linguistically to encode basic shape distinctions, and (ii) which semantic distinctions are made in this domain. In languages with many shape classifiers, these distinctions are encoded (at least partially) in classifiers. In other languages, positional verbs, descriptive modifiers such as “flat” or “round”, or nouns such as “cube”, “ball”, etc. might be the preferred means. In this context, we also want to investigate what other “grammatical work” shape-encoding expressions possibly do in a given language, e.g. unitization of mass nouns, or anaphoric uses of shape-encoding classifiers. This task further seeks to determine the role of shape-related parameters which underlie the design of objects in the semantics of the system under investigation.
  • Senft, G. (2008). The teaching of Tokunupei. In J. Kommers, & E. Venbrux (Eds.), Cultural styles of knowledge transmission: Essays in honour of Ad Borsboom (pp. 139-144). Amsterdam: Aksant.

    Abstract

    The paper describes how the documentation of a popular song of the adolescents of Tauwema in 1982 led to the collection of the myth of Imdeduya and Yolina, one of the most important myths of the Trobriand Islands. When I returned to my field site in 1989, Tokunupei, one of my best consultants in Tauwema, remembered my interest in the myth and provided me with further information on this topic. Tokunupei's teachings open up an important access to Trobriand eschatology.
  • Senft, G. (2003). Wosi Milamala: Weisen von Liebe und Tod auf den Trobriand Inseln. In I. Bobrowski (Ed.), Anabasis: Prace Ofiarowane Professor Krystynie Pisarkowej (pp. 289-295). Kraków: LEXIS.
  • Senft, G. (2003). Zur Bedeutung der Sprache für die Feldforschung. In B. Beer (Ed.), Methoden und Techniken der Feldforschung (pp. 55-70). Berlin: Reimer.
  • Senft, G. (2008). Zur Bedeutung der Sprache für die Feldforschung. In B. Beer (Ed.), Methoden und Techniken der Feldforschung (pp. 103-118). Berlin: Reimer.
  • Senft, G. (2003). Ethnographic Methods. In W. Deutsch, T. Hermann, & G. Rickheit (Eds.), Psycholinguistik - Ein internationales Handbuch [Psycholinguistics - An International Handbook] (pp. 106-114). Berlin: Walter de Gruyter.
  • Senft, G. (2003). Ethnolinguistik. In B. Beer, & H. Fischer (Eds.), Ethnologie: Einführung und Überblick. 5. Aufl., Neufassung (pp. 255-270). Berlin: Reimer.
  • Senft, G. (2008). Event conceptualization and event report in serial verb constructions in Kilivila: Towards a new approach to research an old phenomenon. In G. Senft (Ed.), Serial verb constructions in Austronesian and Papuan languages (pp. 203-230). Canberra: Pacific Linguistics Publishers.
  • Senft, G. (2008). Introduction. In G. Senft (Ed.), Serial verb constructions in Austronesian and Papuan languages (pp. 1-15). Canberra: Pacific Linguistics Publishers.
  • Senft, G. (2003). Reasoning in language. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 28-30). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877663.

    Abstract

    This project aims to investigate how speakers of various languages in indigenous cultures verbally reason about moral issues. The ways in which a solution for a moral problem is found, phrased and justified will be taken as the basis for researching reasoning processes that manifest themselves verbally in the speakers’ arguments put forward to solve a number of moral problems which will be presented to them in the form of unfinished story plots or scenarios that ask for a solution. The plots chosen attempt to present common problems in human society and human behaviour. They should function to elicit moral discussion and/or moral arguments in groups of consultants of at least three persons.
  • Senghas, A., Kita, S., & Ozyurek, A. (2008). Children creating core properties of language: Evidence from an emerging sign language in Nicaragua. In K. A. Lindgren, D. DeLuca, & D. J. Napoli (Eds.), Signs and Voices: Deaf Culture, Identity, Language, and Arts. Washington, DC: Gallaudet University Press.
  • Senghas, A., Ozyurek, A., & Kita, S. (2003). Encoding motion events in an emerging sign language: From Nicaraguan gestures to Nicaraguan signs. In A. E. Baker, B. van den Bogaerde, & O. A. Crasborn (Eds.), Crosslinguistic perspectives in sign language research (pp. 119-130). Hamburg: Signum Press.
  • Seuren, P. A. M. (2003). Verb clusters and branching directionality in German and Dutch. In P. A. M. Seuren, & G. Kempen (Eds.), Verb Constructions in German and Dutch (pp. 247-296). Amsterdam: John Benjamins.
  • Seuren, P. A. M. (2008). Apollonius Dyscolus en de semantische syntaxis. In J. van Driel, & T. Janssen (Eds.), Ontheven aan de tijd: Linguistisch-historische studies voor Jan Noordegraaf bij zijn zestigste verjaardag (pp. 15-24). Amsterdam: Stichting Neerlandistiek VU Amsterdam.

    Abstract

    This article places the debate between Chomskyan autonomous syntax and Generative Semantics in the context of the first beginnings of syntactic theory set out in Perì suntáxeõs ('On syntax') by Apollonius Dyscolus (second century CE). It shows that, theoretically speaking, the Apollonian concept of syntax implied an algorithmically organized system of composition rules with a lexico-semantic, not a sound-based, input, unlike Apollonius's strictly sound-based postulated rule systems for the composition of phonemes into syllables and of syllables into words. This meaning-based notion of syntax persisted essentially unchanged (though refined by Sanctius during the sixteenth century) until the 1930s, when structuralism began to take seriously the notion of algorithmically organized rule systems for the generation of sentences. This meant a break with the Apollonian meaning-based approach to syntax. The Generative Semantics movement, which arose during the 1960s but was nipped in the bud, implied a return to the tradition, though with much improved formal underpinnings.
  • Seuren, P. A. M. (2003). Logic, language and thought. In H. J. Ribeiro (Ed.), Encontro nacional de filosofia analítica. (pp. 259-276). Coimbra, Portugal: Faculdade de Letras.
  • Seuren, P. A. M. (2014). Scope and external datives. In B. Cornillie, C. Hamans, & D. Jaspers (Eds.), Proceedings of a mini-symposium on Pieter Seuren's 80th birthday organised at the 47th Annual Meeting of the Societas Linguistica Europaea.

    Abstract

    In this study it is argued that scope, as a property of scope‐creating operators, is a real and important element in the semantico‐grammatical description of languages. The notion of scope is illustrated and, as far as possible, defined. A first idea is given of the ‘grammar of scope’, which defines the relation between scope in the logically structured semantic analysis (SA) of sentences on the one hand and surface structure on the other. Evidence is adduced showing that peripheral preposition phrases (PPPs) in the surface structure of sentences represent scope‐creating operators in SA, and that external datives fall into this category: they are scope‐creating PPPs. It follows that, in English and Dutch, the internal dative (I gave John a book) and the external dative (I gave a book to John) are not simple syntactic variants expressing the same meaning. Instead, internal datives are an integral part of the argument structure of the matrix predicate, whereas external datives represent scope‐creating operators in SA. In the Romance languages, the (non‐pronominal) external dative has been re‐analysed as an argument type dative, but this has not happened in English and Dutch, which have many verbs that only allow for an external dative (e.g. donate, reveal). When both datives are allowed, there are systematic semantic differences, including scope differences.
  • Seuren, P. A. M. (1985). Predicate raising and semantic transparency in Mauritian Creole. In N. Boretzky, W. Enninger, & T. Stolz (Eds.), Akten des 2. Essener Kolloquiums über "Kreolsprachen und Sprachkontakte", 29-30 Nov. 1985 (pp. 203-229). Bochum: Brockmeyer.
  • Seuren, P. A. M. (2014). Universe restriction in the logic of language. In J. Hoeksema, & D. Gilbers (Eds.), Black Book: A Festschrift in Honor of Frans Zwarts (pp. 282-300). Groningen: University of Groningen.
  • Shi, R., Werker, J., & Cutler, A. (2003). Function words in early speech perception. In Proceedings of the 15th International Congress of Phonetic Sciences (pp. 3009-3012).

    Abstract

    Three experiments examined whether infants recognise functors in phrases, and whether their representations of functors are phonetically well specified. Eight- and 13-month-old English infants heard monosyllabic lexical words preceded by real functors (e.g., the, his) versus nonsense functors (e.g., kuh); the latter were minimally modified segmentally (but not prosodically) from real functors. Lexical words were constant across conditions; thus recognition of functors would appear as longer listening time to sequences with real functors. Eight-month-olds' listening times to sequences with real versus nonsense functors did not significantly differ, suggesting that they did not recognise real functors, or that their functor representations lacked phonetic specification. However, 13-month-olds listened significantly longer to sequences with real functors. Thus, somewhere between 8 and 13 months of age infants learn familiar functors and represent them with segmental detail. We propose that the accumulated frequency of functors in the input passes a critical threshold during this time.
  • Shkaravska, O., Van Eekelen, M., & Tamalet, A. (2014). Collected size semantics for strict functional programs over general polymorphic lists. In U. Dal Lago, & R. Pena (Eds.), Foundational and Practical Aspects of Resource Analysis: Third International Workshop, FOPARA 2013, Bertinoro, Italy, August 29-31, 2013, Revised Selected Papers (pp. 143-159). Berlin: Springer.

    Abstract

    Size analysis can be an important part of heap consumption analysis. This paper is part of ongoing work on typing support for checking output-on-input size dependencies for function definitions in a strict functional language. A significant restriction of our earlier results is that inner data structures (e.g. in a list of lists) must all have the same size. Here, we make a big step forward by overcoming this limitation via the introduction of higher-order size annotations, such that varying sizes of inner data structures can be expressed. In this way the analysis becomes applicable to general, polymorphic nested lists.
  • Sidnell, J., & Enfield, N. J. (2014). Deixis and the interactional foundations of reference. In Y. Huang (Ed.), The Oxford handbook of pragmatics.
  • Sidnell, J., Kockelman, P., & Enfield, N. J. (2014). Community and social life. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 481-483). Cambridge: Cambridge University Press.
  • Sidnell, J., Enfield, N. J., & Kockelman, P. (2014). Interaction and intersubjectivity. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 343-345). Cambridge: Cambridge University Press.
  • Sidnell, J., & Enfield, N. J. (2014). The ontology of action, in interaction. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 423-446). Cambridge: Cambridge University Press.
  • Skiba, R. (2003). Computer Analysis: Corpus based language research. In U. Ammon, N. Dittmar, K. Mattheier, & P. Trudgill (Eds.), Handbook "Sociolinguistics" (2nd ed.) (pp. 1250-1260). Berlin: de Gruyter.
  • Skiba, R. (2008). Korpora in der Zweitspracherwerbsforschung: Internetzugang zu Daten des ungesteuerten Zweitspracherwerbs. In B. Ahrenholz, U. Bredel, W. Klein, M. Rost-Roth, & R. Skiba (Eds.), Empirische Forschung und Theoriebildung: Beiträge aus Soziolinguistik, Gesprochene-Sprache- und Zweitspracherwerbsforschung: Festschrift für Norbert Dittmar (pp. 21-30). Frankfurt am Main: Lang.
  • Skiba, R., Dittmar, N., & Bressem, J. (2008). Planning, collecting, exploring and archiving longitudinal L2 data: Experiences from the P-MoLL project. In L. Ortega, & H. Byrnes (Eds.), The longitudinal study of advanced L2 capacities (pp. 73-88). New York/London: Routledge.
  • Sloetjes, H. (2014). ELAN: Multimedia annotation application. In J. Durand, U. Gut, & G. Kristoffersen (Eds.), The Oxford Handbook of Corpus Phonology (pp. 305-320). Oxford: Oxford University Press.
  • Sloetjes, H., & Wittenburg, P. (2008). Annotation by category - ELAN and ISO DCR. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).

    Abstract

    The Data Category Registry is one of the ISO initiatives towards the establishment of standards for Language Resource management, creation and coding. Successful application of the DCR depends on the availability of tools that can interact with it. This paper describes the first steps that have been taken to provide users of the multimedia annotation tool ELAN with the means to create references from tiers and annotations to data categories defined in the ISO Data Category Registry. It first gives a brief description of the capabilities of ELAN and the structure of the documents it creates. After a concise overview of the goals and current state of the ISO DCR infrastructure, a description is given of how the preliminary connectivity with the DCR is implemented in ELAN.
  • De Smedt, K., Hinrichs, E., Meurers, D., Skadiņa, I., Sanford Pedersen, B., Navarretta, C., Bel, N., Lindén, K., Lopatková, M., Hajič, J., Andersen, G., & Lenkiewicz, P. (2014). CLARA: A new generation of researchers in common language resources and their applications. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 2166-2174).
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Examining strains and symptoms of the ‘Literacy Virus’: The effects of orthographic transparency on phonological processing in a connectionist model of reading. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Austin, TX: Cognitive Science Society.

    Abstract

    The effect of literacy on phonological processing has been described in terms of a virus that “infects all speech processing” (Frith, 1998). Empirical data has established that literacy leads to changes in the way phonological information is processed. Harm & Seidenberg (1999) demonstrated that a connectionist network trained to map between English orthographic and phonological representations displays more componential phonological processing than a network trained only to stably represent the phonological forms of words. In this study we use a similar model yet manipulate the transparency of orthographic-to-phonological mappings. We observe that networks trained on a transparent orthography are better at restoring phonetic features and phonemes. However, networks trained on non-transparent orthographies are more likely to restore corrupted phonological segments with legal, coarser linguistic units (e.g. onset, coda). Our study therefore provides an explicit description of how differences in orthographic transparency can lead to varying strains and symptoms of the ‘literacy virus’.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). A comprehensive model of spoken word recognition must be multimodal: Evidence from studies of language-mediated visual attention. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Austin, TX: Cognitive Science Society.

    Abstract

    When processing language, the cognitive system has access to information from a range of modalities (e.g. auditory, visual) to support language processing. Language-mediated visual attention studies have shown sensitivity of the listener to phonological, visual, and semantic similarity when processing a word. In a computational model of language-mediated visual attention that models spoken word processing as the parallel integration of information from phonological, semantic and visual processing streams, we simulate such effects of competition within modalities. Our simulations raised untested predictions about stronger and earlier effects of visual and semantic similarity compared to phonological similarity around the rhyme of the word. Two visual world studies confirmed these predictions. The model and behavioral studies suggest that, during spoken word comprehension, multimodal information can be recruited rapidly to constrain lexical selection to the extent that phonological rhyme information may exert little influence on this process.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Modelling language – vision interactions in the hub and spoke framework. In J. Mayor, & P. Gomez (Eds.), Computational Models of Cognitive Processes: Proceedings of the 13th Neural Computation and Psychology Workshop (NCPW13) (pp. 3-16). Singapore: World Scientific Publishing.

    Abstract

    Multimodal integration is a central characteristic of human cognition. However our understanding of the interaction between modalities and its influence on behaviour is still in its infancy. This paper examines the value of the Hub & Spoke framework (Plaut, 2002; Rogers et al., 2004; Dilkina et al., 2008; 2010) as a tool for exploring multimodal interaction in cognition. We present a Hub and Spoke model of language–vision information interaction and report the model’s ability to replicate a range of phonological, visual and semantic similarity word-level effects reported in the Visual World Paradigm (Cooper, 1974; Tanenhaus et al, 1995). The model provides an explicit connection between the percepts of language and the distribution of eye gaze and demonstrates the scope of the Hub-and-Spoke architectural framework by modelling new aspects of multimodal cognition.
  • De Sousa, H. (2008). The development of echo-subject markers in Southern Vanuatu. In T. J. Curnow (Ed.), Selected papers from the 2007 Conference of the Australian Linguistic Society. Australian Linguistic Society.

    Abstract

    One of the defining features of the Southern Vanuatu language family is the echo-subject (ES) marker (Lynch 2001: 177-178). Canonically, an ES marker indicates that the subject of the clause is coreferential with the subject of the preceding clause. This paper begins with a survey of the various ES systems found in Southern Vanuatu. Two prominent differences amongst the ES systems are: a) the level of obligatoriness of the ES marker; and b) the level of grammatical integration between an ES clause and the preceding clause. The variation found amongst the ES systems reveals a clear path of grammaticalisation from the VP coordinator *ma in Proto–Southern Vanuatu to the various types of ES marker in contemporary Southern Vanuatu languages.
  • Stehouwer, H., & Van den Bosch, A. (2008). Putting the t where it belongs: Solving a confusion problem in Dutch. In S. Verberne, H. Van Halteren, & P.-A. Coppen (Eds.), Computational Linguistics in the Netherlands 2007: Selected Papers from the 18th CLIN Meeting (pp. 21-36). Utrecht: LOT.

    Abstract

    A common Dutch writing error is to confuse a word ending in -d with a neighbor word ending in -dt. In this paper we describe the development of a machine-learning-based disambiguator that can determine which word ending is appropriate, on the basis of its local context. We develop alternative disambiguators, varying between a single monolithic classifier and having multiple confusable experts disambiguate between confusable pairs. Disambiguation accuracy of the best developed disambiguators exceeds 99%; when we apply these disambiguators to an external test set of collected errors, our detection strategy correctly identifies up to 79% of the errors.
  • Sumer, B., Perniss, P., Zwitserlood, I., & Ozyurek, A. (2014). Learning to express "left-right" & "front-behind" in a sign versus spoken language. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1550-1555). Austin, Tx: Cognitive Science Society.

    Abstract

    Developmental studies show that it takes longer for children learning spoken languages to acquire viewpoint-dependent spatial relations (e.g., left-right, front-behind), compared to ones that are not viewpoint-dependent (e.g., in, on, under). The current study investigates how children learn to express viewpoint-dependent relations in a sign language where depicted spatial relations can be communicated in an analogue manner in the space in front of the body or by using body-anchored signs (e.g., tapping the right and left hand/arm to mean left and right). Our results indicate that the visual-spatial modality might have a facilitating effect on learning to express these spatial relations (especially in encoding of left-right) in a sign language (i.e., Turkish Sign Language) compared to a spoken language (i.e., Turkish).
  • De Swart, P., & Van Bergen, G. (2014). Unscrambling the lexical nature of weak definites. In A. Aguilar-Guevara, B. Le Bruyn, & J. Zwarts (Eds.), Weak referentiality (pp. 287-310). Amsterdam: Benjamins.

    Abstract

    We investigate how the lexical nature of weak definites influences the phenomenon of direct object scrambling in Dutch. Earlier experiments have indicated that weak definites are more resistant to scrambling than strong definites. We examine how the notion of weak definiteness used in this experimental work can be reduced to lexical connectedness. We explore four different ways of quantifying the relation between a direct object and the verb. Our results show that predictability of a verb given the object (verb cloze probability) provides the best fit to the weak/strong distinction used in the earlier experiments.
  • Ten Bosch, L., Ernestus, M., & Boves, L. (2014). Comparing reaction time sequences from human participants and computational models. In Proceedings of Interspeech 2014: 15th Annual Conference of the International Speech Communication Association (pp. 462-466).

    Abstract

    This paper addresses the question of how to compare reaction times computed by a computational model of speech comprehension with reaction times observed from human participants. The question is based on the observation that reaction time sequences differ substantially per participant, which raises the issue of how exactly the model is to be assessed. Part of the variation in reaction time sequences is caused by so-called local speed: the current reaction time correlates to some extent with a number of previous reaction times, due to slowly varying fluctuations in attention, fatigue, etc. This paper proposes a method, based on time series analysis, to filter the observed reaction times in order to separate out the local speed effects. Results show that after such filtering, both the between-participant correlations and the average correlation between participant and model increase. The presented technique provides insights into relevant aspects that are to be taken into account when comparing reaction time sequences.
  • Torreira, F., Roberts, S. G., & Hammarström, H. (2014). Functional trade-off between lexical tone and intonation: Typological evidence from polar-question marking. In C. Gussenhoven, Y. Chen, & D. Dediu (Eds.), Proceedings of the 4th International Symposium on Tonal Aspects of Language (pp. 100-103).

    Abstract

    Tone languages are often reported to make use of utterance-level intonation as well as of lexical tone. We test the alternative hypotheses that a) the coexistence of lexical tone and utterance-level intonation in tone languages results in a diminished functional load for intonation, and b) that lexical tone and intonation can coexist in tone languages without undermining each other’s functional load in a substantial way. In order to do this, we collected data from two large typological databases, and performed mixed-effects and phylogenetic regression analyses controlling for genealogical and areal factors to estimate the probability of a language exhibiting grammatical devices for encoding polar questions given its status as a tonal or an intonation-only language. Our analyses indicate that, while both tone and intonation-only languages tend to develop grammatical devices for marking polar questions above chance level, tone languages do this at a significantly higher frequency, with estimated probabilities ranging between 0.88 and 0.98. This statistical bias provides cross-linguistic empirical support to the view that the use of tonal features to mark lexical contrasts leads to a diminished functional load for utterance-level intonation.
  • Torreira, F., Simonet, M., & Hualde, J. I. (2014). Quasi-neutralization of stress contrasts in Spanish. In N. Campbell, D. Gibbon, & D. Hirst (Eds.), Proceedings of Speech Prosody 2014 (pp. 197-201).

    Abstract

    We investigate the realization and discrimination of lexical stress contrasts in pitch-unaccented words in phrase-medial position in Spanish, a context in which intonational pitch accents are frequently absent. Results from production and perception experiments show that in this context durational and intensity cues to stress are produced by speakers and used by listeners above chance level. However, due to substantial amounts of phonetic overlap between stress categories in production, and of numerous errors in the identification of stress categories in perception, we suggest that, in the absence of intonational cues, Spanish speakers engaged in online language use must rely on contextual information in order to distinguish stress contrasts.
  • Trilsbeek, P., Broeder, D., Van Valkenhoef, T., & Wittenburg, P. (2008). A grid of regional language archives. In N. Calzolari (Ed.), Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008) (pp. 1474-1477). European Language Resources Association (ELRA).

    Abstract

    About two years ago, the Max Planck Institute for Psycholinguistics in Nijmegen, The Netherlands, started an initiative to install regional language archives in various places around the world, particularly in places where a large number of endangered languages exist and are being documented. These digital archives make use of the LAT archiving framework [1] that the MPI has developed over the past nine years. This framework consists of a number of web-based tools for depositing, organizing and utilizing linguistic resources in a digital archive. The regional archives are in principle autonomous archives, but they can decide to share metadata descriptions and language resources with the MPI archive in Nijmegen and become part of a grid of linked LAT archives. By doing so, they will also take advantage of the long-term preservation strategy of the MPI archive. This paper describes the reasoning behind this initiative and how in practice such an archive is set up.
  • Trilsbeek, P., & Koenig, A. (2014). Increasing the future usage of endangered language archives. In D. Nathan, & P. Austin (Eds.), Language Documentation and Description vol 12 (pp. 151-163). London: SOAS. Retrieved from http://www.elpublishing.org/PID/142.
  • Trippel, T., Broeder, D., Durco, M., & Ohren, O. (2014). Towards automatic quality assessment of component metadata. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 3851-3856).

    Abstract

    Measuring the quality of metadata is only possible by assessing the quality of the underlying schema and the metadata instance. We propose some factors that are measurable automatically for metadata according to the CMD framework, taking into account the variability of schemas that can be defined in this framework. The factors include, among others, the number of elements, the (re-)use of reusable components, and the number of filled-in elements. The resulting score can serve as an indicator of the overall quality of the CMD instance, used for feedback to metadata providers or to provide an overview of the overall quality of metadata within a repository. The score is independent of specific schemas and generalizable. An overall assessment of harvested metadata is provided in the form of statistical summaries and the distribution, based on a corpus of harvested metadata. The score is implemented in XQuery and can be used in tools, editors and repositories.
  • Valtersson, E., & Torreira, F. (2014). Rising intonation in spontaneous French: How well can continuation statements and polar questions be distinguished? In N. Campbell, D. Gibbon, & D. Hirst (Eds.), Proceedings of Speech Prosody 2014 (pp. 785-789).

    Abstract

    This study investigates whether a clear distinction can be made between the prosody of continuation statements and polar questions in conversational French, which are both typically produced with final rising intonation. We show that the two utterance types can be distinguished over chance level by several pitch, duration, and intensity cues. However, given the substantial amount of phonetic overlap and the nature of the observed differences between the two utterance types (i.e. overall F0 scaling, final intensity drop and degree of final lengthening), we propose that variability in the phonetic detail of intonation rises in French is due to the effects of interactional factors (e.g. turn-taking context, type of speech act) rather than to the existence of two distinct rising intonation contour types in this language.
  • Van Turennout, M., Schmitt, B., & Hagoort, P. (2003). When words come to mind: Electrophysiological insights on the time course of speaking and understanding words. In N. O. Schiller, & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 241-278). Berlin: Mouton de Gruyter.
  • van Staden, M., & Majid, A. (2003). Body colouring task 2003. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 66-68). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877666.

    Abstract

    This Field Manual entry has been superseded by the published version: Van Staden, M., & Majid, A. (2006). Body colouring task. Language Sciences, 28(2-3), 158-161. doi:10.1016/j.langsci.2005.11.004.

    Additional information

    2003_body_model_large.pdf

  • Van Leeuwen, T. M., Petersson, K. M., Langner, O., Rijpkema, M., & Hagoort, P. (2014). Color specificity in the human V4 complex: An fMRI repetition suppression study. In T. D. Papageorgiou, G. I. Christopoulos, & S. M. Smirnakis (Eds.), Advanced Brain Neuroimaging Topics in Health and Disease - Methods and Applications (pp. 275-295). Rijeka, Croatia: Intech. doi:10.5772/58278.
  • Van Uytvanck, D., Dukers, A., Ringersma, J., & Trilsbeek, P. (2008). Language-sites: Accessing and presenting language resources via geographic information systems. In N. Calzolari, K. Choukri, B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, & D. Tapias (Eds.), Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008). Paris: European Language Resources Association (ELRA).

    Abstract

    The emerging area of Geographic Information Systems (GIS) has proven to add an interesting dimension to many research projects. Within the language-sites initiative we have brought together a broad range of links to digital language corpora and resources. Via Google Earth's visually appealing 3D-interface users can spin the globe, zoom into an area they are interested in and access directly the relevant language resources. This paper focuses on several ways of relating the map and the online data (lexica, annotations, multimedia recordings, etc.). Furthermore, we discuss some of the implementation choices that have been made, including future challenges. In addition, we use practical examples to show how scholars (both linguists and anthropologists) are using GIS tools to fulfill their specific research needs. This illustrates how both scientists and the general public can benefit from geography-based access to digital language data.
  • Van Wijk, C., & Kempen, G. (1985). From sentence structure to intonation contour: An algorithm for computing pitch contours on the basis of sentence accents and syntactic structure. In B. Müller (Ed.), Sprachsynthese: Zur Synthese von natürlich gesprochener Sprache aus Texten und Konzepten (pp. 157-182). Hildesheim: Georg Olms.
  • Van Valin Jr., R. D. (2003). Minimalism and explanation. In J. Moore, & M. Polinsky (Eds.), The nature of explanation in linguistic theory (pp. 281-297). University of Chicago Press.
  • Van Putten, S. (2014). Left-dislocation and subordination in Avatime (Kwa). In R. Van Gijn, J. Hammond, D. Matic, S. van Putten, & A.-V. Galucio (Eds.), Information Structure and Reference Tracking in Complex Sentences (pp. 71-98). Amsterdam: John Benjamins.

    Abstract

    Left dislocation is characterized by a sentence-initial element which is crossreferenced in the remainder of the sentence, and often set off by an intonation break. Because of these properties, left dislocation has been analyzed as an extraclausal phenomenon. Whether or not left dislocation can occur within subordinate clauses has been a matter of debate in the literature, but has never been checked against corpus data. This paper presents data from Avatime, a Kwa (Niger-Congo) language spoken in Ghana, showing that left dislocation occurs within subordinate clauses in spontaneous discourse. This poses a problem for the extraclausal analysis of left dislocation. I show that this problem can best be solved by assuming that Avatime allows the embedding of units larger than a clause.
  • Van Valin Jr., R. D., & Mairal Usón, R. (2014). Interfacing the lexicon and an ontology in a linking system. In M. d. l. Á. Gómez González, F. J. Ruiz de Mendoza Ibáñez, & F. Gonzálvez-García (Eds.), Theory and practice in functional-cognitive space (pp. 205-228). Amsterdam: Benjamins.

    Abstract

    The aim of this paper is to discuss the repercussions of a conceptual orientation on two crucial parts of the Role and Reference Grammar (RRG) linking algorithm, that is, semantic representation and constructional schemas. Firstly, it is argued that adopting FunGramKB’s notion of conceptual logical structure (CLS) over standard RRG logical structures (LSs) has numerous advantages, since meaning now has access to conceptual knowledge and therefore a CLS provides a format that goes beyond those aspects that are syntactically visible. The second part introduces the notion of the grammaticon, the component where constructional schemas actually reside. RRG constructional schemas are analyzed within a conceptual framework like that provided in FunGramKB. In essence, it is shown that a conceptual orientation to the RRG linking system through the addition of CLSs substantially enriches its semantic representations.
  • Van Valin Jr., R. D. (2008). Some remarks on universal grammar. In J. Guo, E. Lieven, N. Budwig, S. Ervin-Tripp, K. Nakamura, & S. Ozcaliskan (Eds.), Crosslinguistic approaches to the psychology of language: Research in the tradition of Dan Isaac Slobin (pp. 311-320). New York: Psychology Press.
  • Van Valin Jr., R. D. (2014). Role and Reference Grammar. In A. Carnie, Y. Sato, & D. Siddiqi (Eds.), Routledge handbook of syntax (pp. 579-603). London: Routledge.
  • Van Valin Jr., R. D. (2008). RPs and the nature of lexical and syntactic categories in role and reference grammar. In R. D. Van Valin Jr. (Ed.), Investigations of the syntax-semantics-pragmatics interface (pp. 161-178). Amsterdam: Benjamins.
  • Van Gijn, R. (2014). Yurakaré. In M. Crevels, & P. C. Muysken (Eds.), Las lenguas de Bolivia. Vol. 3: Oriente (pp. 135-174). La Paz: Plural Editores.
  • Váradi, T., Wittenburg, P., Krauwer, S., Wynne, M., & Koskenniemi, K. (2008). CLARIN: Common language resources and technology infrastructure. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).

    Abstract

    This paper gives an overview of the CLARIN project [1], which aims to create a research infrastructure that makes language resources and technology (LRT) available and readily usable to scholars of all disciplines, in particular the humanities and social sciences (HSS).
  • Verkerk, A., & Lestrade, S. (2008). The encoding of adjectives. In M. Van Koppen, & B. Botma (Eds.), Linguistics in the Netherlands 2008 (pp. 157-168). Amsterdam: Benjamins.

    Abstract

    In this paper, we will give a unified account of the cross-linguistic variation in the encoding of adjectives in predicative and attributive constructions. Languages may differ in the encoding strategy of adjectives in the predicative domain (Stassen 1997), and sometimes change this strategy in the attributive domain (Verkerk 2007). We will show that the interaction of two principles, that of faithfulness to the semantic class of a lexical root and that of faithfulness to discourse functions, can account for all attested variation in the encoding of adjectives.
  • Verkerk, A. (2014). Where Alice fell into: Motion events from a parallel corpus. In B. Szmrecsanyi, & B. Wälchli (Eds.), Aggregating dialectology, typology, and register analysis: Linguistic variation in text and speech (pp. 324-354). Berlin: De Gruyter.
  • Von Stutterheim, C., Carroll, M., & Klein, W. (2003). Two ways of construing complex temporal structures. In F. Lenz (Ed.), Deictic conceptualization of space, time and person (pp. 97-133). Amsterdam: Benjamins.
  • Vonk, W., & Cozijn, R. (2003). On the treatment of saccades and regressions in eye movement measures of reading time. In J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind's eye: Cognitive and applied aspects of eye movement research (pp. 291-312). Amsterdam: Elsevier.
  • Vosse, T. G., & Kempen, G. (2008). Parsing verb-final clauses in German: Garden-path and ERP effects modeled by a parallel dynamic parser. In B. Love, K. McRae, & V. Sloutsky (Eds.), Proceedings of the 30th Annual Conference on the Cognitive Science Society (pp. 261-266). Washington: Cognitive Science Society.

    Abstract

    Experimental sentence comprehension studies have shown that superficially similar German clauses with verb-final word order elicit very different garden-path and ERP effects. We show that a computer implementation of the Unification Space parser (Vosse & Kempen, 2000) in the form of a localist-connectionist network can model the observed differences, at least qualitatively. The model embodies a parallel dynamic parser that, in contrast with existing models, does not distinguish between consecutive first-pass and reanalysis stages, and does not use semantic or thematic roles. It does use structural frequency data and animacy information.
  • Wagner, A., & Braun, A. (2003). Is voice quality language-dependent? Acoustic analyses based on speakers of three different languages. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 651-654). Adelaide: Causal Productions.
  • Warner, N. (2003). Rapid perceptibility as a factor underlying universals of vowel inventories. In A. Carnie, H. Harley, & M. Willie (Eds.), Formal approaches to function in grammar, in honor of Eloise Jelinek (pp. 245-261). Amsterdam: Benjamins.
  • Weber, A., & Smits, R. (2003). Consonant and vowel confusion patterns by American English listeners. In M. J. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences.

    Abstract

    This study investigated the perception of American English phonemes by native listeners. Listeners identified either the consonant or the vowel in all possible English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0 dB, 8 dB, and 16 dB). Effects of syllable position, signal-to-noise ratio, and articulatory features on vowel and consonant identification are discussed. The results constitute the largest source of data that is currently available on phoneme confusion patterns of American English phonemes by native listeners.
  • Weber, A., & Smits, R. (2003). Consonant and vowel confusion patterns by American English listeners. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 1437-1440). Adelaide: Causal Productions.

    Abstract

    This study investigated the perception of American English phonemes by native listeners. Listeners identified either the consonant or the vowel in all possible English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0 dB, 8 dB, and 16 dB). Effects of syllable position, signal-to-noise ratio, and articulatory features on vowel and consonant identification are discussed. The results constitute the largest source of data that is currently available on phoneme confusion patterns of American English phonemes by native listeners.
  • Weber, A., & Melinger, A. (2008). Name dominance in spoken word recognition is (not) modulated by expectations: Evidence from synonyms. In A. Botinis (Ed.), Proceedings of ISCA Tutorial and Research Workshop On Experimental Linguistics (ExLing 2008) (pp. 225-228). Athens: University of Athens.

    Abstract

    Two German eye-tracking experiments tested whether top-down expectations interact with acoustically-driven word-recognition processes. Competitor objects with two synonymous names were paired with target objects whose names shared word onsets with either the dominant or the non-dominant name of the competitor. Non-dominant names of competitor objects were either introduced before the test session or not. Eye-movements were monitored while participants heard instructions to click on target objects. Results demonstrate dominant and non-dominant competitor names were considered for recognition, regardless of top-down expectations, though dominant names were always activated more strongly.
  • Weber, A. (2008). What eye movements can tell us about spoken-language processing: A psycholinguistic survey. In C. M. Riehl (Ed.), Was ist linguistische Evidenz: Kolloquium des Zentrums Sprachenvielfalt und Mehrsprachigkeit, November 2006 (pp. 57-68). Aachen: Shaker.
  • Weber, A. (2008). What the eyes can tell us about spoken-language comprehension [Abstract]. Journal of the Acoustical Society of America, 124, 2474-2474.

    Abstract

    Lexical recognition is typically slower in L2 than in L1. Part of the difficulty stems from insufficiently precise processing of L2 phonemes. Consequently, L2 listeners fail to eliminate candidate words that L1 listeners can exclude from competing for recognition. For instance, the inability to distinguish /r/ from /l/ in rocket and locker makes both words possible candidates for Japanese listeners when hearing their onset (e.g., Cutler, Weber, and Otake, 2006). The L2 disadvantage can, however, be dispelled: for L2 listeners, but not L1 listeners, L2 speech from a non-native talker with the same language background is known to be as intelligible as L2 speech from a native talker (e.g., Bent and Bradlow, 2003). A reason for this may be that L2 listeners have ample experience with the segmental deviations that are characteristic of their own accent. On this account, only phonemic deviations that are typical of the listeners' own accent will cause spurious lexical activation in L2 listening (e.g., English magic pronounced as megic for Dutch listeners). In this talk, I will present evidence from cross-modal priming studies with a variety of L2 listener groups, showing how the processing of phonemic deviations is accent-specific but withstands fine phonetic differences.
  • Wender, K. F., Haun, D. B. M., Rasch, B. H., & Blümke, M. (2003). Context effects in memory for routes. In C. Freksa, W. Brauer, C. Habel, & K. F. Wender (Eds.), Spatial cognition III: Routes and navigation, human memory and learning, spatial representation and spatial learning (pp. 209-231). Berlin: Springer.
  • Widlok, T., Rapold, C. J., & Hoymann, G. (2008). Multimedia analysis in documentation projects: Kinship, interrogatives and reciprocals in ǂAkhoe Haiǁom. In K. D. Harrison, D. S. Rood, & A. Dwyer (Eds.), Lessons from documented endangered languages (pp. 355-370). Amsterdam: Benjamins.

    Abstract

    This contribution emphasizes the role of multimedia data not only for archiving languages but also for creating opportunities for innovative analyses. In the case at hand, video material was collected as part of the documentation of ǂAkhoe Haiǁom, a Khoisan language spoken in northern Namibia. The multimedia documentation project brought together linguistic and anthropological work to highlight connections between specialized domains, namely kinship terminology, interrogatives and reciprocals. These connections would have gone unnoticed or undocumented in more conventional modes of language description. It is suggested that such an approach may be particularly appropriate for the documentation of endangered languages since it directs the focus of attention away from isolated traits of languages towards more complex practices of communication that are also frequently threatened with extinction.
  • Widlok, T. (2008). The dilemmas of walking: A comparative view. In T. Ingold, & J. L. Vergunst (Eds.), Ways of walking: Ethnography and practice on foot (pp. 51-66). Aldershot: Ashgate.
  • Wilson, J. J., & Little, H. (2014). Emerging languages in esoteric and exoteric niches: Evidence from rural sign languages. In Ways to Protolanguage 3 book of abstracts (pp. 54-55).
  • Windhouwer, M., Petro, J., & Shayan, S. (2014). RELISH LMF: Unlocking the full power of the lexical markup framework. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 1032-1037).
  • Wittenburg, P., Trilsbeek, P., & Wittenburg, F. (2014). Corpus archiving and dissemination. In J. Durand, U. Gut, & G. Kristoffersen (Eds.), The Oxford Handbook of Corpus Phonology (pp. 133-149). Oxford: Oxford University Press.