Publications

  • O'Connor, L. (2007). Motion, transfer, and transformation: The grammar of change in Lowland Chontal. Amsterdam: Benjamins.

    Abstract

    Typologies are critical tools for linguists, but typologies, like grammars, are known to leak. This book addresses the question of typological overlap from the perspective of a single language. In Lowland Chontal of Oaxaca, a language of southern Mexico, change events are expressed with three types of predicates, and each predicate type corresponds to a different language type in the well-known typology of lexicalization patterns established by Talmy and elaborated by others. O’Connor evaluates the predictive powers of the typology by examining the consequences of each predicate type in a variety of contexts, using data from narrative discourse, stimulus response, and elicitation. This is the first detailed look at the lexical and grammatical resources of the verbal system in Chontal and their relation to the semantics of change. The analysis of how and why Chontal speakers choose among these verbal resources to achieve particular communicative and social goals serves both as documentation of an endangered language and as a theoretical contribution towards a typology of language use.
  • Offenga, F., Broeder, D., Wittenburg, P., Ducret, J., & Romary, L. (2006). Metadata profile in the ISO data category registry. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 1866-1869).
  • Omar, R., Henley, S. M., Hailstone, J. C., Sauter, D., Scott, S. K., Fox, N. C., Rossor, M. N., & Warren, J. D. (2007). Recognition of emotions in faces, voices and music in frontotemporal lobar degeneration [Abstract]. Journal of Neurology, Neurosurgery & Psychiatry, 78(9), 1014.

    Abstract

    Frontotemporal lobar degeneration (FTLD) is a group of neurodegenerative conditions characterised by focal frontal and/or temporal lobe atrophy. Patients develop a range of cognitive and behavioural abnormalities, including prominent difficulties in comprehending and expressing emotions, with significant clinical and social consequences. Here we report a systematic prospective analysis of emotion processing in different input modalities in patients with FTLD. We examined recognition of happiness, sadness, fear and anger in facial expressions, non-verbal vocalisations and music in patients with FTLD and in healthy age matched controls. The FTLD group was significantly impaired in all modalities compared with controls, and this effect was most marked for music. Analysing each emotion separately, recognition of negative emotions was impaired in all three modalities in FTLD, and this effect was most marked for fear and anger. Recognition of happiness was deficient only with music. Our findings support the idea that FTLD causes impaired recognition of emotions across input channels, consistent with a common central representation of emotion concepts. Music may be a sensitive probe of emotional deficits in FTLD, perhaps because it requires a more abstract representation of emotion than do animate stimuli such as faces and voices.
  • Otake, T., Davis, S. M., & Cutler, A. (1995). Listeners’ representations of within-word structure: A cross-linguistic and cross-dialectal investigation. In J. Pardo (Ed.), Proceedings of EUROSPEECH 95: Vol. 3 (pp. 1703-1706). Madrid: European Speech Communication Association.

    Abstract

    Japanese, British English and American English listeners were presented with spoken words in their native language, and asked to mark on a written transcript of each word the first natural division point in the word. The results showed clear and strong patterns of consensus, indicating that listeners have available to them conscious representations of within-word structure. Orthography did not play a strongly deciding role in the results. The patterns of response were at variance with results from on-line studies of speech segmentation, suggesting that the present task taps not those representations used in on-line listening, but levels of representation which may involve much richer knowledge of word-internal structure.
  • Ozyurek, A. (2011). Language in our hands: The role of the body in language, cognition and communication [Inaugural lecture]. Nijmegen: Radboud University Nijmegen.

    Abstract

    Even though most studies of language have focused on the speech channel and/or viewed language as an amodal abstract system, there is growing evidence on the role our bodily actions and perceptions play in language and communication. In this context, Özyürek discusses what our meaningful visible bodily actions reveal about our language capacity. Drawing on cross-linguistic, behavioral, and neurobiological research, she shows that co-speech gestures reflect the imagistic, iconic aspects of the events talked about and at the same time interact with language production and comprehension processes. Sign languages can also be characterized as having an abstract system of linguistic categories as well as using iconicity in several aspects of language structure and in its processing. Studying language multimodally reveals how grounded language is in our visible bodily actions and opens up new lines of research to study language in its situated, natural face-to-face context.
  • Pallier, C., Cutler, A., & Sebastian-Galles, N. (1997). Prosodic structure and phonetic processing: A cross-linguistic study. In Proceedings of EUROSPEECH 97 (pp. 2131-2134). Grenoble, France: ESCA.

    Abstract

    Dutch and Spanish differ in how predictable the stress pattern is as a function of the segmental content: it is correlated with syllable weight in Dutch but not in Spanish. In the present study, two experiments were run to compare the abilities of Dutch and Spanish speakers to separately process segmental and stress information. It was predicted that the Spanish speakers would have more difficulty focusing on the segments and ignoring the stress pattern than the Dutch speakers. The task was a speeded classification task on CVCV syllables, with blocks of trials in which the stress pattern could vary versus blocks in which it was fixed. First, we found interference due to stress variability in both languages, suggesting that the processing of segmental information cannot be performed independently of stress. Second, the effect was larger for Spanish than for Dutch, suggesting that the degree of interference from stress variation may be partially mitigated by the predictability of stress placement in the language.
  • Papafragou, A., & Ozturk, O. (2007). Children's acquisition of modality. In Proceedings of the 2nd Conference on Generative Approaches to Language Acquisition North America (GALANA 2) (pp. 320-327). Somerville, Mass.: Cascadilla Press.
  • Papafragou, A. (2007). On the acquisition of modality. In T. Scheffler, & L. Mayol (Eds.), Penn Working Papers in Linguistics. Proceedings of the 30th Annual Penn Linguistics Colloquium (pp. 281-293). Department of Linguistics, University of Pennsylvania.
  • Papafragou, A., & Ozturk, O. (2006). The acquisition of epistemic modality. In A. Botinis (Ed.), Proceedings of ITRW on Experimental Linguistics in ExLing-2006 (pp. 201-204). ISCA Archive.

    Abstract

    In this paper we contribute to the body of knowledge about the acquisition of English epistemic modal verbs (e.g. Mary may/has to be at school). Semantically, these verbs encode possibility or necessity with respect to the available evidence. Pragmatically, the use of epistemic modals often gives rise to scalar conversational inferences (Mary may be at school -> Mary doesn’t have to be at school). The acquisition of epistemic modals is thus challenging for children on both levels. We present findings from two studies conducted with 5-year-old children and adults. Unlike previous work, our findings show that 5-year-olds have mastered epistemic modal semantics, including the notions of necessity and possibility; however, they are still in the process of acquiring epistemic modal pragmatics.
  • Parhammer*, S. I., Ebersberg*, M., Tippmann*, J., Stärk*, K., Opitz, A., Hinger, B., & Rossi, S. (2019). The influence of distraction on speech processing: How selective is selective attention? In Proceedings of Interspeech 2019 (pp. 3093-3097). doi:10.21437/Interspeech.2019-2699.

    Abstract

    (* indicates shared first authorship)
    The present study investigated the effects of selective attention on the processing of morphosyntactic errors in unattended parts of speech. Two groups of German native (L1) speakers participated in the present study. Participants listened to sentences in which irregular verbs were manipulated in three different conditions (correct, incorrect but attested ablaut pattern, incorrect and crosslinguistically unattested ablaut pattern). In order to track fast dynamic neural reactions to the stimuli, electroencephalography was used. After each sentence, participants in Experiment 1 performed a semantic judgement task, which deliberately distracted the participants from the syntactic manipulations and directed their attention to the semantic content of the sentence. In Experiment 2, participants carried out a syntactic judgement task, which put their attention on the critical stimuli. The use of two different attentional tasks allowed for investigating the impact of selective attention on speech processing and whether morphosyntactic processing steps are performed automatically. In Experiment 2, the incorrect attested condition elicited a larger N400 component compared to the correct condition, whereas in Experiment 1 no differences between conditions were found. These results suggest that the processing of morphosyntactic violations in irregular verbs is not entirely automatic but seems to be strongly affected by selective attention.
  • Pereiro Estevan, Y., Wan, V., Scharenborg, O., & Gallardo Antolín, A. (2006). Segmentación de fonemas no supervisada basada en métodos kernel de máximo margen. In Proceedings of IV Jornadas en Tecnología del Habla.

    Abstract

    This paper develops an automatic, unsupervised phoneme segmentation method. The method uses the maximum margin clustering algorithm [1] to perform phoneme segmentation on continuous speech without requiring any a priori information to train the system.
  • Perniss, P. M., Pfau, R., & Steinbach, M. (Eds.). (2007). Visible variation: Cross-linguistic studies in sign language structure. Berlin: Mouton de Gruyter.

    Abstract

    It has been argued that properties of the visual-gestural modality impose a homogenizing effect on sign languages, leading to less structural variation across sign languages than across spoken languages. However, until recently, research on sign languages was limited to a small number of (Western) sign languages. Before we can truly answer the question of whether modality effects do indeed cause less structural variation, it is necessary to investigate the similarities and differences that exist between sign languages in more detail and, especially, to include in this investigation less studied sign languages. The current research climate is testimony to a surge of interest in the study of a geographically more diverse range of sign languages. The volume reflects that climate and brings together work by scholars engaging in comparative sign linguistics research. The 11 articles discuss data from many different signed and spoken languages and cover a wide range of topics from different areas of grammar including phonology (word pictures), morphology (pronouns, negation, and auxiliaries), syntax (word order, interrogative clauses, auxiliaries, negation, and referential shift) and pragmatics (modal meaning and referential shift). In addition to this, the contributions address psycholinguistic issues, aspects of language change, and issues concerning data collection in sign languages, thereby providing methodological guidelines for further research. Although some papers use a specific theoretical framework for analyzing the data, the volume clearly focuses on empirical and descriptive aspects of sign language variation.
  • Perniss, P. M., Zwitserlood, I., & Ozyurek, A. (2011). Does space structure spatial language? Linguistic encoding of space in sign languages. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (pp. 1595-1600). Austin, TX: Cognitive Science Society.
  • Pluymaekers, M., Ernestus, M., Baayen, R. H., & Booij, G. (2006). The role of morphology in fine phonetic detail: The case of Dutch -igheid. In Variation, detail and representation: 10th Conference on Laboratory Phonology (pp. 53-54).
  • Pluymaekers, M., Ernestus, M., & Baayen, R. H. (2006). Effects of word frequency on the acoustic durations of affixes. In Proceedings of Interspeech 2006 (pp. 953-956). Pittsburgh: ICSLP.

    Abstract

    This study investigates whether the acoustic durations of derivational affixes in Dutch are affected by the frequency of the word they occur in. In a word naming experiment, subjects were presented with a large number of words containing one of the affixes ge-, ver-, ont-, or -lijk. Their responses were recorded on DAT tapes, and the durations of the affixes were measured using Automatic Speech Recognition technology. To investigate whether frequency also affected durations when speech rate was high, the presentation rate of the stimuli was varied. The results show that a higher frequency of the word as a whole led to shorter acoustic realizations for all affixes. Furthermore, affixes became shorter as the presentation rate of the stimuli increased. There was no interaction between word frequency and presentation rate, suggesting that the frequency effect also applies in situations in which the speed of articulation is very high.
  • Poellmann, K., McQueen, J. M., & Mitterer, H. (2011). The time course of perceptual learning. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences 2011 [ICPhS XVII] (pp. 1618-1621). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.

    Abstract

    Two groups of participants were trained to perceive an ambiguous sound [s/f] as either /s/ or /f/ based on lexical bias: One group heard the ambiguous fricative in /s/-final words, the other in /f/-final words. This kind of exposure leads to a recalibration of the /s/-/f/ contrast [e.g., 4]. In order to investigate when and how this recalibration emerges, test trials were interspersed among training and filler trials. The learning effect needed at least 10 clear training items to arise. Its emergence seemed to occur in a rather step-wise fashion. Learning did not improve much after it first appeared. It is likely, however, that the early test trials attracted participants' attention and therefore may have interfered with the learning process.
  • Poletiek, F. H., & Chater, N. (2006). Grammar induction profits from representative stimulus sampling. In R. Sun (Ed.), Proceedings of the 28th Annual Conference of the Cognitive Science Society (CogSci 2006) (pp. 1968-1973). Austin, TX, USA: Cognitive Science Society.
  • Pouw, W., Paxton, A., Harrison, S. J., & Dixon, J. A. (2019). Acoustic specification of upper limb movement in voicing. In A. Grimminger (Ed.), Proceedings of the 6th Gesture and Speech in Interaction – GESPIN 6 (pp. 68-74). Paderborn: Universitaetsbibliothek Paderborn. doi:10.17619/UNIPB/1-812.
  • Pouw, W., & Dixon, J. A. (2019). Quantifying gesture-speech synchrony. In A. Grimminger (Ed.), Proceedings of the 6th Gesture and Speech in Interaction – GESPIN 6 (pp. 75-80). Paderborn: Universitaetsbibliothek Paderborn. doi:10.17619/UNIPB/1-812.

    Abstract

    Spontaneously occurring speech is often seamlessly accompanied by hand gestures. Detailed observations of video data suggest that speech and gesture are tightly synchronized in time, consistent with a dynamic interplay between body and mind. However, spontaneous gesture-speech synchrony has rarely been objectively quantified beyond analyses of video data, which do not allow for identification of kinematic properties of gestures. Consequently, the point in gesture which is held to couple with speech, the so-called moment of “maximum effort”, has been variably equated with the peak velocity, peak acceleration, peak deceleration, or the onset of the gesture. In the current exploratory report, we provide novel evidence from motion-tracking and acoustic data that peak velocity is closely aligned with, and shortly leads, the peak pitch (F0) of speech.

    Additional information

    https://osf.io/9843h/
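The synchrony measure described in the abstract above can be illustrated in a few lines: compare the time of the gesture's peak velocity (from motion-tracking positions) with the time of the speech F0 peak. This is a minimal sketch on synthetic data, not the authors' actual pipeline (their sampling rates, smoothing, and pitch extraction differ), and every name below is hypothetical.

```python
import numpy as np

def peak_lag(position, f0, fs):
    """Lag in seconds between peak gesture speed and peak F0.
    Negative lag means the velocity peak precedes the pitch peak."""
    # Speed from sampled positions via finite differences.
    speed = np.abs(np.gradient(position)) * fs
    # Light moving-average smoothing to suppress tracking jitter.
    kernel = np.ones(5) / 5
    speed = np.convolve(speed, kernel, mode="same")
    t_vel = np.argmax(speed) / fs
    t_f0 = np.argmax(f0) / fs
    return t_vel - t_f0

# Synthetic one-second trial at 100 Hz: a wrist trajectory whose steepest
# movement (velocity peak) occurs 50 ms before the F0 peak.
fs = 100
t = np.arange(0, 1, 1 / fs)
position = np.tanh(10 * (t - 0.40))                  # fastest change at 0.40 s
f0 = 120 + 30 * np.exp(-((t - 0.45) ** 2) / 0.002)   # F0 peak at 0.45 s

lag = peak_lag(position, f0, fs)
print(round(lag, 2))  # → -0.05 (velocity peak leads the pitch peak)
```

On real data one would of course work with continuous lag profiles rather than a single peak pair, but the sign convention (negative = gesture leads) matches the finding reported above.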
  • Rai, N. K., Rai, M., Paudyal, N. P., Schikowski, R., Bickel, B., Stoll, S., Gaenszle, M., Banjade, G., Rai, I. P., Bhatta, T. N., Sauppe, S., Rai, R. M., Rai, J. K., Rai, L. K., Rai, D. B., Rai, G., Rai, D., Rai, D. K., Rai, A., Rai, C. K., Rai, S. M., Rai, R. K., Pettigrew, J., & Dirksmeyer, T. (2011). छिन्ताङ शब्दकोश तथा व्याकरण [Chintang Dictionary and Grammar]. Kathmandu, Nepal: Chintang Language Research Program.
  • Rapold, C. J. (2007). From demonstratives to verb agreement in Benchnon: A diachronic perspective. In A. Amha, M. Mous, & G. Savà (Eds.), Omotic and Cushitic studies: Papers from the Fourth Cushitic Omotic Conference, Leiden, 10-12 April 2003 (pp. 69-88). Cologne: Rüdiger Köppe.
  • Regier, T., Khetarpal, N., & Majid, A. (2011). Inferring conceptual structure from cross-language data. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 1488). Austin, TX: Cognitive Science Society.
  • Reinisch, E., & Weber, A. (2011). Adapting to lexical stress in a foreign accent. In W.-S. Lee, & E. Zee (Eds.), Proceedings of the 17th International Congress of Phonetic Sciences 2011 [ICPhS XVII] (pp. 1678-1681). Hong Kong: Department of Chinese, Translation and Linguistics, City University of Hong Kong.

    Abstract

    An exposure-test paradigm was used to examine whether Dutch listeners can adapt their perception to non-canonical marking of lexical stress in Hungarian-accented Dutch. During exposure, one group of listeners heard only words with correct initial stress, while another group also heard examples of unstressed initial syllables that were marked by high pitch, a possible stress cue in Dutch. Subsequently, listeners’ eye movements to target-competitor pairs with segmental overlap but different stress patterns were tracked while hearing Hungarian-accented Dutch. Listeners who had heard non-canonically produced words previously distinguished target-competitor pairs faster than listeners who had only been exposed to canonical forms before. This suggests that listeners can adapt quickly to speaker-specific realizations of non-canonical lexical stress.
  • Reinisch, E., Weber, A., & Mitterer, H. (2011). Listeners retune phoneme boundaries across languages [Abstract]. Journal of the Acoustical Society of America. Program abstracts of the 162nd Meeting of the Acoustical Society of America, 130(4), 2572-2572.

    Abstract

    Listeners can flexibly retune category boundaries of their native language to adapt to non-canonically produced phonemes. This only occurs, however, if the pronunciation peculiarities can be attributed to stable and not transient speaker-specific characteristics. Listening to someone speaking a second language, listeners could attribute non-canonical pronunciations either to the speaker or to the fact that she is modifying her categories in the second language. We investigated whether, following exposure to Dutch-accented English, Dutch listeners show effects of category retuning during test where they hear the same speaker speaking her native language, Dutch. Exposure was a lexical-decision task where either word-final [f] or [s] was replaced by an ambiguous sound. At test listeners categorized minimal word pairs ending in sounds along an [f]-[s] continuum. Following exposure to English words, Dutch listeners showed boundary shifts of a similar magnitude as following exposure to the same phoneme variants in their native language. This suggests that production patterns in a second language are deemed a stable characteristic. A second experiment suggests that category retuning also occurs when listeners are exposed to and tested with a native speaker of their second language. Listeners thus retune phoneme boundaries across languages.
  • Ringersma, J., & Kemps-Snijders, M. (2007). Creating multimedia dictionaries of endangered languages using LEXUS. In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 65-68). Baixas, France: ISCA-Int.Speech Communication Assoc.

    Abstract

    This paper reports on the development of a flexible web based lexicon tool, LEXUS. LEXUS is targeted at linguists involved in language documentation (of endangered languages). It allows the creation of lexica within the structure of the proposed ISO LMF standard and uses the proposed concept naming conventions from the ISO data categories, thus enabling interoperability, search and merging. LEXUS also offers the possibility to visualize language, since it provides functionalities to include audio, video and still images to the lexicon. With LEXUS it is possible to create semantic network knowledge bases, using typed relations. The LEXUS tool is free for use. Index Terms: lexicon, web based application, endangered languages, language documentation.
  • Rissman, L., & Majid, A. (2019). Agency drives category structure in instrumental events. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2661-2667). Montreal, QC: Cognitive Science Society.

    Abstract

    Thematic roles such as Agent and Instrument have a long-standing place in theories of event representation. Nonetheless, the structure of these categories has been difficult to determine. We investigated how instrumental events, such as someone slicing bread with a knife, are categorized in English. Speakers described a variety of typical and atypical instrumental events, and we determined the similarity structure of their descriptions using correspondence analysis. We found that events where the instrument is an extension of an intentional agent were most likely to elicit similar language, highlighting the importance of agency in structuring instrumental categories.
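The analysis method named in the abstract above, correspondence analysis of a count table, can be sketched directly from its definition (standardized residuals of the correspondence matrix, then an SVD). The event-by-description counts below are entirely hypothetical, chosen only to mimic the paper's contrast between agentive and agent-less instrumental events; this is not the authors' data or code.

```python
import numpy as np

def correspondence_analysis(counts):
    """Principal row coordinates from simple correspondence analysis."""
    P = counts / counts.sum()        # correspondence matrix
    r = P.sum(axis=1)                # row masses
    c = P.sum(axis=0)                # column masses
    # Standardized residuals: D_r^{-1/2} (P - r c^T) D_c^{-1/2}.
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    # Scale left singular vectors by singular values and row masses.
    return (U * sv) / np.sqrt(r)[:, None]

# Hypothetical events x description-features table: how often each event
# type elicited each of three verb choices in speakers' descriptions.
counts = np.array([
    [30,  2,  1],   # agent slices bread with a knife
    [28,  3,  2],   # agent cuts rope with scissors
    [ 2, 25,  5],   # knife falls and slices bread (no agent)
    [ 1, 24,  6],   # wind knocks scissors onto rope (no agent)
], dtype=float)

coords = correspondence_analysis(counts)
# On the first dimension, the two agentive events should pattern together
# and apart from the two agent-less events.
print(np.sign(coords[0, 0]) == np.sign(coords[1, 0]))   # True
print(np.sign(coords[0, 0]) != np.sign(coords[2, 0]))   # True
```

The first dimension of the row coordinates recovers exactly the kind of similarity structure the abstract describes: descriptions cluster by whether the instrument is wielded by an intentional agent.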
  • Roberts, L., Gürel, A., Tatar, S., & Marti, L. (Eds.). (2007). EUROSLA Yearbook 7. Amsterdam: Benjamins.

    Abstract

    The annual conference of the European Second Language Association provides an opportunity for the presentation of second language research with a genuinely European flavour. The theoretical perspectives adopted are wide-ranging and may fall within traditions overlooked elsewhere. Moreover, the studies presented are largely multi-lingual and cross-cultural, as befits the make-up of modern-day Europe. At the same time, the work demonstrates sophisticated awareness of scholarly insights from around the world. The EUROSLA yearbook presents a selection each year of the very best research from the annual conference. Submissions are reviewed and professionally edited, and only those of the highest quality are selected. Contributions are in English.
  • Roberts, L., Pallotti, G., & Bettoni, C. (Eds.). (2011). EUROSLA Yearbook 2011. Amsterdam: John Benjamins.

    Abstract

    The annual conference of the European Second Language Association provides an opportunity for the presentation of second language research with a genuinely European flavour. The theoretical perspectives adopted are wide-ranging and may fall within traditions overlooked elsewhere. Moreover, the studies presented are largely multi-lingual and cross-cultural, as befits the make-up of modern-day Europe. At the same time, the work demonstrates sophisticated awareness of scholarly insights from around the world. The EUROSLA yearbook presents a selection each year of the very best research from the annual conference. Submissions are reviewed and professionally edited, and only those of the highest quality are selected. Contributions are in English.
  • Rösler, D., & Skiba, R. (1987). Eine Datenbank für den Sprachunterricht: Ein Lehrmaterial-Steinbruch für Deutsch als Zweitsprache. Mainz: Werkmeister.
  • De Ruiter, J. P. (2007). Some multimodal signals in humans. In I. Van der Sluis, M. Theune, E. Reiter, & E. Krahmer (Eds.), Proceedings of the Workshop on Multimodal Output Generation (MOG 2007) (pp. 141-148).

    Abstract

    In this paper, I will give an overview of some well-studied multimodal signals that humans produce while they communicate with other humans, and discuss the implications of those studies for HCI. I will first discuss a conceptual framework that allows us to distinguish between functional and sensory modalities. This distinction is important, as there are multiple functional modalities using the same sensory modality (e.g., facial expression and eye-gaze in the visual modality). A second theoretically important issue is redundancy. Some signals appear to be redundant with a signal in another modality, whereas others give new information or even appear to give conflicting information (see e.g., the work of Susan Goldin-Meadows on speech accompanying gestures). I will argue that multimodal signals are never truly redundant. First, many gestures that appear at first sight to express the same meaning as the accompanying speech generally provide extra (analog) information about manner, path, etc. Second, the simple fact that the same information is expressed in more than one modality is itself a communicative signal. Armed with this conceptual background, I will then proceed to give an overview of some multimodal signals that have been investigated in human-human research, and the level of understanding we have of the meaning of those signals. The latter issue is especially important for potential implementations of these signals in artificial agents. First, I will discuss pointing gestures. I will address the issue of the timing of pointing gestures relative to the speech they are supposed to support, the mutual dependency between pointing gestures and speech, and discuss the existence of alternative ways of pointing from other cultures. The most frequent form of pointing that does not involve the index finger is a cultural practice called lip-pointing which employs two visual functional modalities, mouth-shape and eye-gaze, simultaneously for pointing.
    Next, I will address the issue of eye-gaze. A classical study by Kendon (1967) claims that there is a systematic relationship between eye-gaze (at the interlocutor) and turn-taking states. Research at our institute has shown that this relationship is weaker than has often been assumed. If the dialogue setting contains a visible object that is relevant to the dialogue (e.g., a map), the rate of eye-gaze-at-other drops dramatically and its relationship to turn taking disappears completely. The implications for machine generated eye-gaze are discussed. Finally, I will explore a theoretical debate regarding spontaneous gestures. It has often been claimed that the class of gestures that is called iconic by McNeill (1992) are a “window into the mind”. That is, they are claimed to give the researcher (or even the interlocutor) a direct view into the speaker’s thought, without being obscured by the complex transformations that take place when a thought is transformed into a verbal utterance. I will argue that this is an illusion. Gestures can be shown to be specifically designed such that the listener can be expected to interpret them. Although the transformations carried out to express a thought in gesture are indeed (partly) different from the corresponding transformations for speech, they are a) complex, and b) severely understudied. This obviously has consequences both for the gesture research agenda, and for the generation of iconic gestures by machines.
  • De Ruiter, J. P., & Enfield, N. J. (2007). The BIC model: A blueprint for the communicator. In C. Stephanidis (Ed.), Universal access in Human-Computer Interaction: Applications and services (pp. 251-258). Berlin: Springer.
  • Sadakata, M., & McQueen, J. M. (2011). The role of variability in non-native perceptual learning of a Japanese geminate-singleton fricative contrast. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 873-876).

    Abstract

    The current study reports the enhancing effect of a high variability training procedure in the learning of a Japanese geminate-singleton fricative contrast. Dutch natives took part in a five-day training procedure in which they identified geminate and singleton variants of the Japanese fricative /s/. They heard either many repetitions of a limited set of words recorded by a single speaker (simple training) or fewer repetitions of a more variable set of words recorded by multiple speakers (variable training). Pre-post identification evaluations and a transfer test indicated clear benefits of the variable training.
  • Sauermann, A., Höhle, B., Chen, A., & Järvikivi, J. (2011). Intonational marking of focus in different word orders in German children. In M. B. Washburn, K. McKinney-Bock, E. Varis, & A. Sawyer (Eds.), Proceedings of the 28th West Coast Conference on Formal Linguistics (pp. 313-322). Somerville, MA: Cascadilla Proceedings Project.

    Abstract

    The use of word order and intonation to mark focus in child speech has received some attention. However, past work usually examined each device separately or only compared the realizations of focused vs. non-focused constituents. This paper investigates the interaction between word order and intonation in the marking of different focus types in 4- to 5-year-old German-speaking children and an adult control group. An answer-reconstruction task was used to elicit syntactic (word order) and intonational focus marking of subjects and objects (locus of focus) in three focus types (broad, narrow, and contrastive focus). The results indicate that both children and adults used intonation to distinguish broad from contrastive focus, but they differed in the marking of narrow focus. Further, both groups preferred intonation to word order as a device for focus marking. But children showed an early sensitivity to the impact of focus type and focus location on word order variation and on phonetic means of marking focus.
  • Scharenborg, O., Ernestus, M., & Wan, V. (2007). Segmentation of speech: Child's play? In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 1953-1956). Adelaide: Causal Productions.

    Abstract

    The difficulty of segmenting a speech signal into its words is immediately clear when listening to a foreign language: since the words of the language are unknown, the signal is much harder to segment. Infants are faced with the same task when learning their first language. This study provides a better understanding of the task that infants face while learning their native language. We employed an automatic algorithm on the task of speech segmentation without prior knowledge of the labels of the phonemes. An analysis of the boundaries erroneously placed inside a phoneme showed that the algorithm consistently placed additional boundaries in phonemes in which acoustic changes occur. These acoustic changes may be as great as the transition from the closure to the burst of a plosive or as subtle as the formant transitions in low or back vowels. Moreover, we found that glottal vibration may attenuate the relevance of acoustic changes within obstruents. An interesting question for further research is how infants learn to overcome the natural tendency to segment these ‘dynamic’ phonemes.
  • Scharenborg, O., & Wan, V. (2007). Can unquantised articulatory feature continuums be modelled? In INTERSPEECH 2007 - 8th Annual Conference of the International Speech Communication Association (pp. 2473-2476). ISCA Archive.

    Abstract

    Articulatory feature (AF) modelling of speech has received a considerable amount of attention in automatic speech recognition research. Although termed ‘articulatory’, previous definitions make certain assumptions that are invalid, for instance, that articulators ‘hop’ from one fixed position to the next. In this paper, we studied two methods, based on support vector classification (SVC) and regression (SVR), in which the articulation continuum is modelled without being restricted to using discrete AF value classes. A comparison with a baseline system trained on quantised values of the articulation continuum showed that both SVC and SVR outperform the baseline for two of the three investigated AFs, with improvements up to 5.6% absolute.
  • Scharenborg, O., Wan, V., & Moore, R. K. (2006). Capturing fine-phonetic variation in speech through automatic classification of articulatory features. In Speech Recognition and Intrinsic Variation Workshop [SRIV2006] (pp. 77-82). ISCA Archive.

    Abstract

    The ultimate goal of our research is to develop a computational model of human speech recognition that is able to capture the effects of fine-grained acoustic variation on speech recognition behaviour. As part of this work we are investigating automatic feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. In the experiments reported here, we compared support vector machines (SVMs) with multilayer perceptrons (MLPs). MLPs have been widely (and rather successfully) used for the task of multi-value articulatory feature classification, while (to the best of our knowledge) SVMs have not. This paper compares the performances of the two classifiers and analyses the results in order to better understand the articulatory representations. It was found that the MLPs outperformed the SVMs, but it is concluded that both classifiers exhibit similar behaviour in terms of patterns of errors.
  • Scharenborg, O., Mitterer, H., & McQueen, J. M. (2011). Perceptual learning of liquids. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 149-152).

    Abstract

    Previous research on lexically-guided perceptual learning has focussed on contrasts that differ primarily in local cues, such as plosive and fricative contrasts. The present research had two aims: to investigate whether perceptual learning occurs for a contrast with non-local cues, the /l/-/r/ contrast, and to establish whether STRAIGHT can be used to create ambiguous sounds on an /l/-/r/ continuum. Listening experiments showed lexically-guided learning about the /l/-/r/ contrast. Listeners can thus tune in to unusual speech sounds characterised by non-local cues. Moreover, STRAIGHT can be used to create stimuli for perceptual learning experiments, opening up new research possibilities. Index Terms: perceptual learning, morphing, liquids, human word recognition, STRAIGHT.
  • Scheu, O., & Zinn, C. (2007). How did the e-learning session go? The student inspector. In Proceedings of the 13th International Conference on Artificial Intelligence and Education (AIED 2007). Amsterdam: IOS Press.

    Abstract

    Good teachers know their students, and exploit this knowledge to adapt or optimise their instruction. Traditional teachers know their students because they interact with them face-to-face in classroom or one-to-one tutoring sessions. In these settings, they can build student models by exploiting the multi-faceted nature of human-human communication. In distance-learning contexts, teacher and student have to cope with the lack of such direct interaction, which can have detrimental effects for both. In a past study we analysed teacher requirements for tracking student actions in computer-mediated settings. Given the results of this study, we devised and implemented a tool that allows teachers to keep track of their learners' interaction in e-learning systems. We present the tool's functionality and user interfaces, and an evaluation of its usability.
  • Schiller, N. O., Van Lieshout, P. H. H. M., Meyer, A. S., & Levelt, W. J. M. (1997). Is the syllable an articulatory unit in speech production? Evidence from an Emma study. In P. Wille (Ed.), Fortschritte der Akustik: Plenarvorträge und Fachbeiträge der 23. Deutschen Jahrestagung für Akustik (DAGA 97) (pp. 605-606). Oldenburg: DEGA.
  • Schoenmakers, G.-J., & De Swart, P. (2019). Adverbial hurdles in Dutch scrambling. In A. Gattnar, R. Hörnig, M. Störzer, & S. Featherston (Eds.), Proceedings of Linguistic Evidence 2018: Experimental Data Drives Linguistic Theory (pp. 124-145). Tübingen: University of Tübingen.

    Abstract

    This paper addresses the role of the adverb in Dutch direct object scrambling constructions. We report four experiments in which we investigate whether the structural position and the scope sensitivity of the adverb affect acceptability judgments of scrambling constructions and native speakers' tendency to scramble definite objects. We conclude that the type of adverb plays a key role in Dutch word ordering preferences.
  • Schuerman, W. L., McQueen, J. M., & Meyer, A. S. (2019). Speaker statistical averageness modulates word recognition in adverse listening conditions. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1203-1207). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    We tested whether statistical averageness (SA) at the level of the individual speaker could predict a speaker's intelligibility. 28 female and 21 male speakers of Dutch were recorded producing 336 sentences, each containing two target nouns. Recordings were compared to those of all other same-sex speakers using dynamic time warping (DTW). For each sentence, the DTW distance constituted a metric of phonetic distance from one speaker to all other speakers. SA comprised the average of these distances. Later, the same participants performed a word recognition task on the target nouns in the same sentences, under three degraded listening conditions. In all three conditions, accuracy increased with SA. This held even when participants listened to their own utterances. These findings suggest that listeners process speech with respect to the statistical properties of the language spoken in their community, rather than using their own speech as a reference.
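    The averageness metric described in this abstract can be sketched as follows. This is a minimal textbook dynamic-programming DTW over toy 1-D sequences, not the authors' actual implementation; the `tracks` data and both function names are hypothetical illustrations of the idea (mean DTW distance from one speaker to all others).

```python
import math

def dtw_distance(a, b):
    """Classic dynamic-programming DTW cost between two 1-D sequences."""
    n, m = len(a), len(b)
    # cost[i][j] = minimal accumulated distance aligning a[:i] with b[:j]
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def statistical_averageness(speaker, others):
    """SA of one speaker: mean DTW distance to the other speakers'
    renditions of the same sentence (higher = less average)."""
    return sum(dtw_distance(speaker, o) for o in others) / len(others)

# Toy 1-D 'acoustic tracks' of one sentence by four same-sex speakers;
# speaker 3 is a deliberate outlier.
tracks = [[0.0, 1.0, 2.0, 1.0], [0.1, 1.1, 2.1, 1.0],
          [0.0, 0.9, 2.2, 1.1], [3.0, 3.0, 3.0, 3.0]]
sa = [statistical_averageness(t, tracks[:i] + tracks[i + 1:])
      for i, t in enumerate(tracks)]
print(max(range(4), key=lambda i: sa[i]))  # the outlier has the largest SA
```

    In the study, lower distance to the group (higher averageness) predicted better word recognition under degraded listening conditions.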
  • Schulte im Walde, S., Melinger, A., Roth, M., & Weber, A. (2007). An empirical characterization of response types in German association norms. In Proceedings of the GLDV workshop on lexical-semantic and ontological resources.
  • Scott, S., & Sauter, D. (2006). Non-verbal expressions of emotion - acoustics, valence, and cross cultural factors. In Third International Conference on Speech Prosody 2006. ISCA.

    Abstract

    This presentation will address aspects of the expression of emotion in non-verbal vocal behaviour, specifically attempting to determine the roles of both positive and negative emotions, their acoustic bases, and the extent to which these are recognized in non-Western cultures.
  • Seidlmayer, E., Galke, L., Melnychuk, T., Schultz, C., Tochtermann, K., & Förstner, K. U. (2019). Take it personally - A Python library for data enrichment for infometrical applications. In M. Alam, R. Usbeck, T. Pellegrini, H. Sack, & Y. Sure-Vetter (Eds.), Proceedings of the Posters and Demo Track of the 15th International Conference on Semantic Systems co-located with 15th International Conference on Semantic Systems (SEMANTiCS 2019).

    Abstract

    Like every other social sphere, science is influenced by the individual characteristics of researchers. However, for investigations of scientific networks, only little data about the social background of researchers, e.g. social origin, gender, affiliation etc., is available. This paper introduces "Take it personally - TIP", a conceptual model and library currently under development, which aims to support the semantic enrichment of publication databases with semantically related background information that resides elsewhere in the (semantic) web, such as Wikidata. The supplementary information enriches the original information in the publication databases and thus facilitates the creation of complex scientific knowledge graphs. Such enrichment helps to improve the scientometric analysis of scientific publications, as it can take the social backgrounds of researchers into account, and helps to understand social structure in research communities.
  • Seijdel, N., Sakmakidis, N., De Haan, E. H. F., Bohte, S. M., & Scholte, H. S. (2019). Implicit scene segmentation in deeper convolutional neural networks. In Proceedings of the 2019 Conference on Cognitive Computational Neuroscience (pp. 1059-1062). doi:10.32470/CCN.2019.1149-0.

    Abstract

    Feedforward deep convolutional neural networks (DCNNs) are matching and even surpassing human performance on object recognition. This performance suggests that activation of a loose collection of image features could support the recognition of natural object categories, without dedicated systems to solve specific visual subtasks. Recent findings in humans, however, suggest that while feedforward activity may suffice for sparse scenes with isolated objects, additional visual operations ('routines') that aid the recognition process (e.g. segmentation or grouping) are needed for more complex scenes. Linking human visual processing to the performance of DCNNs with increasing depth, we here explored if, how, and when object information is differentiated from the backgrounds objects appear on. To this end, we controlled the information in both objects and backgrounds, as well as the relationship between them, by adding noise, manipulating background congruence and systematically occluding parts of the image. Results indicated less distinction between object and background features for shallower networks. For those networks, we observed a benefit of training on segmented objects (as compared to unsegmented objects). Overall, deeper networks trained on natural (unsegmented) scenes seem to perform implicit 'segmentation' of objects from their background, possibly through improved selection of relevant features.
  • Senft, G., & Wilkins, D. (1995). A man, a tree, and forget about the pigs: Space games, spatial reference and cross-linguistic comparison. Plenary paper presented at the 19th International LAUD Symposium "Language and space", Duisburg. Mimeo: Nijmegen.
  • Senft, G. (2007). Language, culture and cognition: Frames of spatial reference and why we need ontologies of space [Abstract]. In A. G. Cohn, C. Freksa, & B. Bebel (Eds.), Spatial cognition: Specialization and integration (pp. 12).

    Abstract

    One of the many results of the "Space" research project conducted at the MPI for Psycholinguistics is that there are three "Frames of spatial Reference" (FoRs), the relative, the intrinsic and the absolute FoR. Cross-linguistic research showed that speakers who prefer one FoR in verbal spatial references rely on a comparable coding system for memorizing spatial configurations and for making inferences with respect to these spatial configurations in non-verbal problem solving. Moreover, research results also revealed that in some languages these verbal FoRs also influence gestural behavior. These results document the close interrelationship between language, culture and cognition in the domain "Space". The proper description of these interrelationships in the spatial domain requires language and culture specific ontologies.
  • Senft, G. (Ed.). (1997). Referring to space: Studies in Austronesian and Papuan languages. Oxford: Clarendon Press.
  • Senft, G. (2011). The Tuma underworld of love: Erotic and other narrative songs of the Trobriand Islanders and their spirits of the dead. Amsterdam: John Benjamins.

    Abstract

    The Trobriand Islanders' eschatological belief system explains what happens when someone dies. Bronislaw Malinowski described essentials of this eschatology in his articles "Baloma: the Spirits of the Dead in the Trobriand Islands" and "Myth in Primitive Psychology". There he also presented the Trobrianders' belief that a "baloma" can be reborn; he claimed that Trobrianders are unaware of the father's role as genitor. This volume presents a critical review of Malinowski's ethnography of Trobriand eschatology - finally settling the "virgin birth" controversy. It also documents the ritualized and highly poetic "wosi milamala" - the harvest festival songs. They are sung in an archaic variety of Kilivila called "biga baloma" - the baloma language. Malinowski briefly refers to these songs but does not mention that they codify many aspects of Trobriand eschatology. The songs are still sung at specific occasions; however, they are now moribund. With these songs Trobriand eschatology will vanish. The e-book is made available under a CC BY-NC-ND 4.0 license. http://creativecommons.org/licenses/by-nc-nd/4.0/
  • Seuren, P. A. M. (1980). Variabele competentie: Linguïstiek en sociolinguïstiek anno 1980. In Handelingen van het 36e Nederlands Filologencongres: Gehouden te Groningen op woensdag 9, donderdag 10 en vrijdag 11 April 1980 (pp. 41-56). Amsterdam: Holland University Press.
  • Shen, C., & Janse, E. (2019). Articulatory control in speech production. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 2533-2537). Canberra, Australia: Australasian Speech Science and Technology Association Inc.
  • Shen, C., Cooke, M., & Janse, E. (2019). Individual articulatory control in speech enrichment. In M. Ochmann, M. Vorländer, & J. Fels (Eds.), Proceedings of the 23rd International Congress on Acoustics (pp. 5726-5730). Berlin: Deutsche Gesellschaft für Akustik.

    Abstract

    Individual talkers may use various strategies to enrich their speech while speaking in noise (i.e., Lombard speech) to improve their intelligibility. The resulting acoustic-phonetic changes in Lombard speech vary amongst different speakers, but it is unclear what causes these talker differences, and what impact these differences have on intelligibility. This study investigates the potential role of articulatory control in talkers' Lombard speech enrichment success. Seventy-eight speakers read out sentences in both their habitual style and in a condition where they were instructed to speak clearly while hearing loud speech-shaped noise. A diadochokinetic (DDK) speech task, which requires speakers to repetitively produce word or non-word sequences as accurately and as rapidly as possible, was used to quantify their articulatory control. Individuals' predicted intelligibility in both speaking styles (presented at -5 dB SNR) was measured using an acoustic glimpse-based metric: the High-Energy Glimpse Proportion (HEGP). Speakers' HEGP scores show a clear effect of speaking condition (better HEGP scores in the Lombard than habitual condition), but no simple effect of articulatory control on HEGP, nor an interaction between speaking condition and articulatory control. This indicates that individuals' speech enrichment success as measured by the HEGP metric was not predicted by DDK performance.
  • Sloetjes, H., Somasundaram, A., & Wittenburg, P. (2011). ELAN — Aspects of Interoperability and Functionality. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011) (pp. 3249-3252).

    Abstract

    ELAN is a multimedia annotation tool that has been developed for roughly ten years now and is still being extended and improved in, on average, two or three major updates per year. This paper describes the current state of the application, the main areas of attention of the past few years and the plans for the near future. The emphasis is on various interoperability issues: interoperability with other tools through file conversions, process-based interoperability with other tools by means of commands sent to or received from other applications, interoperability on the level of the data model, and semantic interoperability.
  • Smith, A. C., & Monaghan, P. (2011). What are the functional units in reading? Evidence for statistical variation influencing word processing. In Connectionist Models of Neurocognition and Emergent Behavior: From Theory to Applications (pp. 159-172). Singapore: World Scientific.

    Abstract

    Computational models of reading have differed in terms of whether they propose a single route forming the mapping between orthography and phonology or whether there is a lexical/sublexical route distinction. A critical test of the architecture of the reading system is how it deals with multi-letter graphemes. Rastle and Coltheart (1998) found that the presence of digraphs in nonwords but not in words led to an increase in naming times, suggesting that nonwords were processed via a distinct sequential route to words. In contrast Pagliuca, Monaghan, and McIntosh (2008) implemented a single route model of reading and showed that under conditions of visual noise the presence of digraphs in words did have an effect on naming accuracy. In this study, we investigated whether such digraph effects could be found in both words and nonwords under conditions of visual noise. If so it would suggest that effects on words and nonwords are comparable. A single route connectionist model of reading showed greater accuracy for both words and nonwords containing digraphs. Experimental results showed participants were more accurate in recognising words if they contained digraphs. However contrary to model predictions they were less accurate in recognising nonwords containing digraphs compared to controls. We discuss the challenges faced by both theoretical perspectives in interpreting these findings and in light of a psycholinguistic grain size theory of reading.
  • Spapé, M., Verdonschot, R. G., & Van Steenbergen, H. (2019). The E-Primer: An introduction to creating psychological experiments in E-Prime® (2nd ed. updated for E-Prime 3). Leiden: Leiden University Press.

    Abstract

    E-Prime® is the leading software suite by Psychology Software Tools for designing and running Psychology lab experiments. The E-Primer is the perfect accompanying guide: It provides all the necessary knowledge to make E-Prime accessible to everyone. You can learn the tools of Psychological science by following the E-Primer through a series of entertaining, step-by-step recipes that recreate classic experiments. The updated E-Primer expands its proven combination of simple explanations, interesting tutorials and fun exercises, and makes even the novice student quickly confident to create their dream experiment.
  • Speed, L. J., O'Meara, C., San Roque, L., & Majid, A. (Eds.). (2019). Perception Metaphors. Amsterdam: Benjamins.

    Abstract

    Metaphor allows us to think and talk about one thing in terms of another, ratcheting up our cognitive and expressive capacity. It gives us concrete terms for abstract phenomena, for example, ideas become things we can grasp or let go of. Perceptual experience—characterised as physical and relatively concrete—should be an ideal source domain in metaphor, and a less likely target. But is this the case across diverse languages? And are some sensory modalities perhaps more concrete than others? This volume presents critical new data on perception metaphors from over 40 languages, including many which are under-studied. Aside from the wealth of data from diverse languages—modern and historical; spoken and signed—a variety of methods (e.g., natural language corpora, experimental) and theoretical approaches are brought together. This collection highlights how perception metaphor can offer both a bedrock of common experience and a source of continuing innovation in human communication.
  • Staum Casasanto, L., Gijssels, T., & Casasanto, D. (2011). The Reverse-Chameleon Effect: Negative social consequences of anatomical mimicry [Abstract]. In L. Carlson, C. Hölscher, & T. F. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 1103). Austin, TX: Cognitive Science Society.

    Abstract

    Mirror mimicry has well-known consequences for the person being mimicked: it increases how positively they feel about the mimicker (the Chameleon Effect). Here we show that anatomical mimicry has the opposite social consequences: a Reverse-Chameleon Effect. To equate mirror and anatomical mimicry, we asked participants to have a face-to-face conversation with a digital human (VIRTUO), in a fully-immersive virtual environment. Participants' spontaneous head movements were tracked, and VIRTUO mimicked them at a 2-second delay, either mirror-wise, anatomically, or not at all (instead enacting another participant's movements). Participants who were mimicked mirror-wise rated their social interaction with VIRTUO to be significantly more positive than those who were mimicked anatomically. Participants who were not mimicked gave intermediate ratings. Beyond its practical implications, the Reverse-Chameleon Effect constrains theoretical accounts of how mimicry affects social perception.
  • Stehouwer, H., & Auer, E. (2011). Unlocking language archives using search. In C. Vertan, M. Slavcheva, P. Osenova, & S. Piperidis (Eds.), Proceedings of the Workshop on Language Technologies for Digital Humanities and Cultural Heritage, Hissar, Bulgaria, 16 September 2011 (pp. 19-26). Shoumen, Bulgaria: Incoma Ltd.

    Abstract

    The Language Archive manages one of the largest and most varied sets of natural language data. This data consists of video and audio enriched with annotations. It is available for more than 250 languages, many of which are endangered. Researchers have a need to access this data conveniently and efficiently. We provide several browse and search methods to cover this need, which have been developed and expanded over the years. Metadata and content-oriented search methods can be connected for a more focused search. This article aims to provide a complete overview of the available search mechanisms, with a focus on annotation content search, including a benchmark.
  • Stevens, M. A., McQueen, J. M., & Hartsuiker, R. J. (2007). No lexically-driven perceptual adjustments of the [x]-[h] boundary. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1897-1900). Dudweiler: Pirrot.

    Abstract

    Listeners can make perceptual adjustments to phoneme categories in response to a talker who consistently produces a specific phoneme ambiguously. We investigate here whether this type of perceptual learning is also used to adapt to regional accent differences. Listeners were exposed to words produced by a Flemish talker whose realization of [x] or [h] was ambiguous (producing [x] like [h] is a property of the West-Flanders regional accent). Before and after exposure they categorized a [x]-[h] continuum. For both Dutch and Flemish listeners there was no shift of the categorization boundary after exposure to ambiguous sounds in [x]- or [h]-biasing contexts. The absence of a lexically-driven learning effect for this contrast may be because [h] is strongly influenced by coarticulation. As [h] is not stable across contexts, it may be futile to adapt its representation when new realizations are heard.
  • Stivers, T. (2007). Prescribing under pressure: Parent-physician conversations and antibiotics. Oxford: Oxford University Press.

    Abstract

    This book examines parent-physician conversations in detail, showing how parents put pressure on doctors in largely covert ways, for instance through the specific communication practices they use to explain why they have brought their child to the doctor or to answer a history-taking question. It also shows how physicians yield to this seemingly subtle pressure, evidencing that apparently small differences in wording have important consequences for diagnosis and treatment recommendations. Following parents' use of these interactional practices, physicians are more likely to make concessions, alter their diagnosis or alter their treatment recommendation. The book also shows how small changes in the way physicians present their findings and recommendations can decrease parent pressure for antibiotics, and it carefully documents the important and observable link between micro social interaction and macro public health domains.
  • Stivers, T., Mondada, L., & Steensig, J. (Eds.). (2011). The morality of knowledge in conversation. Cambridge: Cambridge University Press.

    Abstract

    Each time we take a turn in conversation we indicate what we know and what we think others know. However, knowledge is neither static nor absolute. It is shaped by those we interact with and governed by social norms - we monitor one another for whether we are fulfilling our rights and responsibilities with respect to knowledge, and for who has relatively more rights to assert knowledge over some state of affairs. This book brings together an international team of leading linguists, sociologists and anthropologists working across a range of European and Asian languages to document some of the ways in which speakers manage the moral domain of knowledge in conversation. The volume demonstrates that if we are to understand how speakers manage issues of agreement, affiliation and alignment - something clearly at the heart of human sociality - we must understand the social norms surrounding epistemic access, primacy and responsibilities.
  • Sulpizio, S., & McQueen, J. M. (2011). When two newly-acquired words are one: New words differing in stress alone are not automatically represented differently. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 1385-1388).

    Abstract

    Do listeners use lexical stress at an early stage in word learning? Artificial-lexicon studies have shown that listeners can learn new spoken words easily. These studies used non-words differing in consonants and/or vowels, but not differing only in stress. If listeners use stress information in word learning, they should be able to learn new words that differ only in stress (e.g., BInulo-biNUlo). We investigated this issue here. When learning new words, Italian listeners relied on segmental information; they did not take stress information into account. Newly-acquired words differing in stress alone are not automatically represented as different words.
  • Ten Bosch, L., Baayen, R. H., & Ernestus, M. (2006). On speech variation and word type differentiation by articulatory feature representations. In Proceedings of Interspeech 2006 (pp. 2230-2233).

    Abstract

    This paper describes ongoing research aiming at the description of variation in speech as represented by asynchronous articulatory features. We will first illustrate how distances in the articulatory feature space can be used for event detection along speech trajectories in this space. The temporal structure imposed by the cosine distance in articulatory feature space coincides to a large extent with the manual segmentation on the phone level. The analysis also indicates that the articulatory feature representation provides better alignments than the MFCC representation does. Secondly, we will present first results indicating that articulatory features can be used to probe for acoustic differences in the onsets of Dutch singulars and plurals.
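    The event-detection idea in this abstract can be sketched in a few lines. This is a generic illustration, not the authors' code: frame-to-frame cosine distance along a feature trajectory, with candidate segment boundaries placed where the distance peaks; the `frames` data and the threshold value are hypothetical.

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def detect_events(frames, threshold=0.2):
    """Indices where the frame-to-frame cosine distance exceeds a
    threshold, taken as candidate boundaries along the trajectory."""
    return [i for i in range(1, len(frames))
            if cosine_distance(frames[i - 1], frames[i]) > threshold]

# Toy articulatory-feature trajectory: a stable region, an abrupt
# change, then another stable region.
frames = [[1.0, 0.0, 0.0]] * 3 + [[0.0, 1.0, 0.0]] * 3
print(detect_events(frames))  # the single change-point between regions
```

    Real feature tracks change gradually, so a practical detector would smooth the distance curve and pick local maxima rather than apply a fixed threshold.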
  • Ten Bosch, L., Mulder, K., & Boves, L. (2019). Phase synchronization between EEG signals as a function of differences between stimuli characteristics. In Proceedings of Interspeech 2019 (pp. 1213-1217). doi:10.21437/Interspeech.2019-2443.

    Abstract

    The neural processing of speech leads to specific patterns in the brain which can be measured as, e.g., EEG signals. When properly aligned with the speech input and averaged over many tokens, the Event Related Potential (ERP) signal is able to differentiate specific contrasts between speech signals. Well-known effects relate to the difference between expected and unexpected words, in particular in the N400, while effects in N100 and P200 are related to attention and acoustic onset effects. Most EEG studies deal with the amplitude of EEG signals over time, sidestepping the effect of phase and phase synchronization. This paper investigates the relation between phase in the EEG signals measured in an auditory lexical decision task by Dutch participants listening to full and reduced English word forms. We show that phase synchronization takes place across stimulus conditions, and that the so-called circular variance is narrowly related to the type of contrast between stimuli.
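    The circular variance mentioned in this abstract has a standard definition: one minus the length of the mean resultant vector of the unit phasors. A minimal sketch with hypothetical phase values (not the authors' EEG pipeline):

```python
import cmath

def circular_variance(phases):
    """Circular variance of phase angles (radians):
    1 - |mean resultant vector|; 0 = perfect phase synchronization,
    values near 1 = phases spread uniformly around the circle."""
    resultant = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return 1.0 - abs(resultant)

# Tightly clustered phases across trials -> strong synchronization.
synced = [0.0, 0.1, -0.1, 0.05]
# Phases spread evenly around the circle -> no synchronization.
spread = [0.0, cmath.pi / 2, cmath.pi, -cmath.pi / 2]
print(round(circular_variance(synced), 3))  # near 0
print(round(circular_variance(spread), 3))  # near 1
```

    A narrow (low) circular variance across trials thus indicates that the EEG phase is consistently aligned to the stimulus, which is the sense in which the paper relates it to the contrast between stimulus conditions.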
  • ten Bosch, L., Hämäläinen, A., Scharenborg, O., & Boves, L. (2006). Acoustic scores and symbolic mismatch penalties in phone lattices. In Proceedings of the 2006 IEEE International Conference on Acoustics, Speech and Signal Processing [ICASSP 2006]. IEEE.

    Abstract

    This paper builds on previous work that aims at unraveling the structure of the speech signal by means of using probabilistic representations. The context of this work is a multi-pass speech recognition system in which a phone lattice is created and used as a basis for a lexical search in which symbolic mismatches are allowed at certain costs. The focus is on the optimization of the costs of phone insertions, deletions and substitutions that are used in the lexical decoding pass. Two optimization approaches are presented, one related to a multi-pass computational model for human speech recognition, the other based on a decoding in which Bayes’ risks are minimized. In the final section, the advantages of these optimization methods are discussed and compared.
  • Ten Bosch, L., Hämäläinen, A., & Ernestus, M. (2011). Assessing acoustic reduction: Exploiting local structure in speech. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 2665-2668).

    Abstract

    This paper presents a method to quantify the spectral characteristics of reduction in speech. Hämäläinen et al. (2009) proposed a measure of spectral reduction which is able to predict a substantial amount of the variation in duration that linguistically motivated variables do not account for. In this paper, we continue studying acoustic reduction in speech by developing a new acoustic measure of reduction, based on local manifold structure in speech. We show that this measure yields significantly improved statistical models for predicting variation in duration.
  • Ter Bekke, M., Ozyurek, A., & Ünal, E. (2019). Speaking but not gesturing predicts motion event memory within and across languages. In A. Goel, C. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2940-2946). Montreal, QC: Cognitive Science Society.

    Abstract

    In everyday life, people see, describe and remember motion events. We tested whether the type of motion event information (path or manner) encoded in speech and gesture predicts which information is remembered and if this varies across speakers of typologically different languages. We focus on intransitive motion events (e.g., a woman running to a tree) that are described differently in speech and co-speech gesture across languages, based on how these languages typologically encode manner and path information (Kita & Özyürek, 2003; Talmy, 1985). Speakers of Dutch (n = 19) and Turkish (n = 22) watched and described motion events. With a surprise (i.e. unexpected) recognition memory task, memory for manner and path components of these events was measured. Neither Dutch nor Turkish speakers' memory for manner went above chance levels. However, we found a positive relation between path speech and path change detection: participants who described the path during encoding were more accurate at detecting changes to the path of an event during the memory task. In addition, the relation between path speech and path memory changed with native language: for Dutch speakers encoding path in speech was related to improved path memory, but for Turkish speakers no such relation existed. For both languages, co-speech gesture did not predict memory. We discuss the implications of these findings for our understanding of the relations between speech, gesture, type of encoding in language and memory.
  • Tice, M., & Henetz, T. (2011). Turn-boundary projection: Looking ahead. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 838-843). Austin, TX: Cognitive Science Society.

    Abstract

    Coordinating with others is hard; and yet we accomplish this every day when we take turns in a conversation. How do we do this? The present study introduces a new method of measuring turn-boundary projection that enables researchers to achieve more valid, flexible, and temporally informative data on online turn projection: tracking an observer’s gaze from the current speaker to the next speaker. In this preliminary investigation, participants consistently looked at the current speaker during their turn. Additionally, they looked to the next speaker before her turn began, and sometimes even before the current speaker finished speaking. This suggests that observer gaze is closely aligned with perceptual processes of turn-boundary projection, and thus may equip the field with the tools to explore how we manage to take turns.
  • Troncoso Ruiz, A., Ernestus, M., & Broersma, M. (2019). Learning to produce difficult L2 vowels: The effects of awareness-raising, exposure and feedback. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1094-1098). Canberra, Australia: Australasian Speech Science and Technology Association Inc.
  • Tschöpel, S., Schneider, D., Bardeli, R., Schreer, O., Masneri, S., Wittenburg, P., Sloetjes, H., Lenkiewicz, P., & Auer, E. (2011). AVATecH: Audio/Video technology for humanities research. In C. Vertan, M. Slavcheva, P. Osenova, & S. Piperidis (Eds.), Proceedings of the Workshop on Language Technologies for Digital Humanities and Cultural Heritage, Hissar, Bulgaria, 16 September 2011 (pp. 86-89). Shoumen, Bulgaria: Incoma Ltd.

    Abstract

    In the AVATecH project the Max Planck Institute for Psycholinguistics (MPI) and the Fraunhofer institutes HHI and IAIS aim to significantly speed up the process of creating annotations of audio-visual data for humanities research. For this we integrate state-of-the-art audio and video pattern recognition algorithms into the widely used ELAN annotation tool. To address the problem of heterogeneous annotation tasks and recordings we provide modular components extended by adaptation and feedback mechanisms to achieve competitive annotation quality within significantly less annotation time. Currently we are designing a large-scale end-user evaluation of the project.
  • Tuinman, A. (2006). Overcompensation of /t/ reduction in Dutch by German/Dutch bilinguals. In Variation, detail and representation: 10th Conference on Laboratory Phonology (pp. 101-102).
  • Tuinman, A., Mitterer, H., & Cutler, A. (2007). Speakers differentiate English intrusive and onset /r/, but L2 listeners do not. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1905-1908). Dudweiler: Pirrot.

    Abstract

    We investigated whether non-native listeners can exploit phonetic detail in recognizing potentially ambiguous utterances, as native listeners can [6, 7, 8, 9, 10]. Due to the phenomenon of intrusive /r/, the English phrase extra ice may sound like extra rice. A production study indicates that the intrusive /r/ can be distinguished from the onset /r/ in rice, as it is phonetically weaker. In two cross-modal identity priming studies, however, we found no conclusive evidence that Dutch learners of English are able to make use of this difference. Instead, auditory primes such as extra rice and extra ice with onset and intrusive /r/s activate both types of targets such as ice and rice. This supports the notion of spurious lexical activation in L2 perception.
  • Tuinman, A., Mitterer, H., & Cutler, A. (2011). The efficiency of cross-dialectal word recognition. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 153-156).

    Abstract

    Dialects of the same language can differ in the casual speech processes they allow; e.g., British English allows the insertion of [r] at word boundaries in sequences such as saw ice, while American English does not. In two speeded word recognition experiments, American listeners heard such British English sequences; in contrast to non-native listeners, they accurately perceived intended vowel-initial words even with intrusive [r]. Thus despite input mismatches, cross-dialectal word recognition benefits from the full power of native-language processing.
  • Turco, G., Gubian, M., & Schertz, J. (2011). A quantitative investigation of the prosody of Verum Focus in Italian. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 961-964).

    Abstract

    This paper investigates the prosodic marking of Verum focus (VF) in Italian, which is said to be realized with a pitch accent on the finite verb (e.g. A: Paul has not eaten the banana - B: (No), Paul HAS eaten the banana!). We tried to discover whether and how Italian speakers prosodically mark VF when producing full-fledged sentences, using a semi-spontaneous production experiment with 27 speakers. Speech rate and f0 contours were extracted using automatic data processing tools and were subsequently analysed using Functional Data Analysis (FDA), which allowed for automatic visualization of patterns in the contour shapes. Our results show that the postfocal region of VF sentences exhibits faster speech rate and lower f0 compared to non-VF cases. However, the expected consistent f0 difference in the focal region of VF sentences was not found in this analysis.
  • Van Alphen, P. M., De Bree, E., Fikkert, P., & Wijnen, F. (2007). The role of metrical stress in comprehension and production of Dutch children at risk of dyslexia. In Proceedings of Interspeech 2007 (pp. 2313-2316). Adelaide: Causal Productions.

    Abstract

    The present study compared the role of metrical stress in comprehension and production of three-year-old children with a familial risk of dyslexia with that of normally developing children. A visual fixation task with stress (mis-)matches in bisyllabic words, as well as a non-word repetition task with bisyllabic targets, were presented to the control and at-risk children. Results show that the at-risk group is less sensitive to stress mismatches in word recognition than the control group. Correct production of metrical stress patterns did not differ significantly between the groups, but the percentages of phonemes produced correctly were lower for the at-risk than the control group. The findings indicate that processing of metrical stress patterns is not impaired in at-risk children, but that the at-risk group cannot exploit metrical stress in word recognition.
  • Van Dooren, A., Tulling, M., Cournane, A., & Hacquard, V. (2019). Discovering modal polysemy: Lexical aspect might help. In M. Brown, & B. Dailey (Eds.), BUCLD 43: Proceedings of the 43rd annual Boston University Conference on Language Development (pp. 203-216). Sommerville, MA: Cascadilla Press.
  • Van Hout, A., Veenstra, A., & Berends, S. (2011). All pronouns are not acquired equally in Dutch: Elicitation of object and quantitative pronouns. In M. Pirvulescu, M. C. Cuervo, A. T. Pérez-Leroux, J. Steele, & N. Strik (Eds.), Selected proceedings of the 4th Conference on Generative Approaches to Language Acquisition North America (GALANA 2010) (pp. 106-121). Somerville, MA: Cascadilla Proceedings Project.

    Abstract

    This research reports the results of eliciting pronouns in two syntactic environments: Object pronouns and quantitative er (Q-er). Thus another type of language is added to the literature on subject and object clitic acquisition in the Romance languages (Jakubowicz et al., 1998; Hamann et al., 1996). Quantitative er is a unique pronoun in the Germanic languages; it has the same distribution as partitive clitics in Romance. Q-er is an N'-anaphor and occurs obligatorily with headless noun phrases with a numeral or weak quantifier. Q-er is licensed only when the context offers an antecedent; it binds an empty position in the NP. Data from typically-developing children aged 5;0-6;0 show that object and Q-er pronouns are not acquired equally; it is proposed that this is due to their different syntax. The use of Q-er involves more sophisticated syntactic knowledge: Q-er occurs at the left edge of the VP and binds an empty position in the NP, whereas object pronouns are simply stand-ins for full NPs and occur in the same position. These Dutch data reveal that pronouns are not used as exclusively as object clitics are in the Romance languages (Varlakosta, in prep.).
  • Van Valin Jr., R. D. (1987). Aspects of the interaction of syntax and pragmatics: Discourse coreference mechanisms and the typology of grammatical systems. In M. Bertuccelli Papi, & J. Verschueren (Eds.), The pragmatic perspective: Selected papers from the 1985 International Pragmatics Conference (pp. 513-531). Amsterdam: Benjamins.
  • Van den Bos, E. J., & Poletiek, F. H. (2006). Implicit artificial grammar learning in adults and children. In R. Sun (Ed.), Proceedings of the 28th Annual Conference of the Cognitive Science Society (CogSci 2006) (pp. 2619). Austin, TX, USA: Cognitive Science Society.
  • Van de Weijer, J. (1997). Language input to a prelingual infant. In A. Sorace, C. Heycock, & R. Shillcock (Eds.), Proceedings of the GALA '97 conference on language acquisition (pp. 290-293). Edinburgh University Press.

    Abstract

    Pitch, intonation, and speech rate were analyzed in a collection of everyday speech heard by one Dutch infant between the ages of six and nine months. Components of each of these variables were measured in the speech of three adult speakers (mother, father, baby-sitter) when they addressed the infant, and when they addressed another adult. The results are in line with previously reported findings which are usually based on laboratory or prearranged settings: infant-directed speech in a natural setting exhibits more pitch variation, a larger number of simple intonation contours, and slower speech rate than does adult-directed speech.
  • Van Heuven, V. J., Haan, J., Janse, E., & Van der Torre, E. J. (1997). Perceptual identification of sentence type and the time-distribution of prosodic interrogativity markers in Dutch. In Proceedings of the ESCA Tutorial and Research Workshop on Intonation: Theory, Models and Applications, Athens, Greece, 1997 (pp. 317-320).

    Abstract

    Dutch distinguishes at least four sentence types: statements and questions, the latter type being subdivided into wh-questions (beginning with a question word), yes/no-questions (with inversion of subject and finite verb), and declarative questions (lexico-syntactically identical to statements). Acoustically, each of these (sub)types was found to have clearly distinct global F0-patterns, as well as a characteristic distribution of final rises [1,2]. The present paper explores the separate contribution of parameters of global downtrend and size of accent-lending pitch movements versus aspects of the terminal rise to the human identification of the four sentence (sub)types, at various positions in the time-course of the utterance. The results show that interrogativity in Dutch can be identified at an early point in the utterance. However, wh-questions are not distinct from statements.
  • Van Gijn, R., Haude, K., & Muysken, P. (Eds.). (2011). Subordination in native South American languages. Amsterdam: Benjamins.

    Abstract

    In terms of its linguistic and cultural make-up, the continent of South America provides linguists and anthropologists with a complex puzzle of language diversity. The continent teems with small language families and isolates, and even languages spoken in adjacent areas can be typologically vastly different from each other. This volume intends to provide a taste of the linguistic diversity found in South America within the area of clause subordination. The potential variety in the strategies that languages can use to encode subordinate events is enormous, yet there are clearly dominant patterns to be discerned: switch reference marking, clause chaining, nominalization, and verb serialization. The book also contributes to the continuing debate on the nature of syntactic complexity, as evidenced in subordination.
  • Van Valin Jr., R. D., & LaPolla, R. J. (1997). Syntax: Structure, meaning and function. Cambridge University Press.
  • Van Valin Jr., R. D. (1987). Pragmatics, island phenomena, and linguistic competence. In A. M. Farley, P. T. Farley, & K.-E. McCullough (Eds.), CLS 22. Papers from the parasession on pragmatics and grammatical theory (pp. 223-233). Chicago Linguistic Society.
  • Van Berkum, J. J. A. (2011). Zonder gevoel geen taal [Inaugural lecture].

    Abstract

    [Translated from Dutch:] Research on language and communication has in the past focused far too much on language as a system for encoding messages, a kind of TCP/IP (a network protocol for communication between computers). That should change, argues Prof. Jos van Berkum, professor of Discourse, Cognition and Communication, in the inaugural lecture he will deliver on 30 September at Utrecht University. He calls for more research into the strong interweaving of language and emotion.
  • Vapnarsky, V., & Le Guen, O. (2011). The guardians of space: Understanding ecological and historical relations of the contemporary Yucatec Mayas to their landscape. In C. Isendahl, & B. Liljefors Persson (Eds.), Ecology, Power, and Religion in Maya Landscapes: Proceedings of the 11th European Maya Conference. Acta Mesoamericano. vol. 23. Markt Schwaben: Saurwein.
  • Versteegh, M., Ten Bosch, L., & Boves, L. (2011). Modelling novelty preference in word learning. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 761-764).

    Abstract

    This paper investigates the effects of novel words on a cognitively plausible computational model of word learning. The model is first familiarized with a set of words, achieving high recognition scores and subsequently offered novel words for training. We show that the model is able to recognize the novel words as different from the previously seen words, based on a measure of novelty that we introduce. We then propose a procedure analogous to novelty preference in infants. Results from simulations of word learning show that adding this procedure to our model speeds up training and helps the model attain higher recognition rates.
  • Verweij, H., Windhouwer, M., & Wittenburg, P. (2011). Knowledge management for small languages. In V. Luzar-Stiffler, I. Jarec, & Z. Bekic (Eds.), Proceedings of the ITI 2011 33rd Int. Conf. on Information Technology Interfaces, June 27-30, 2011, Cavtat, Croatia (pp. 213-218). Zagreb, Croatia: University Computing Centre, University of Zagreb.

    Abstract

    In this paper an overview is given of the knowledge components needed for extensive documentation of small languages. The Language Archive is striving to offer all these tools to the linguistic community. The major tools in relation to the knowledge components are described, followed by a discussion of what is currently lacking and possible strategies for moving forward.
  • De Vos, C. (2006). Mixed signals: Combining affective and linguistic functions of eyebrows in sign language of The Netherlands (Master's thesis). Nijmegen: Department of Linguistics, Radboud University.

    Abstract

    Sign Language of the Netherlands (NGT) is a visual-gestural language in which linguistic information is conveyed through manual as well as non-manual channels; not only the hands, but also body position, head position and facial expression are important for the language structure. Facial expressions serve grammatical functions in the marking of topics, yes/no questions, and wh-questions (Coerts, 1992). Furthermore, facial expression is used nonlinguistically in the expression of affect (Ekman, 1979). Consequently, at the phonetic level obligatory marking of grammar using facial expression may conflict with the expression of affect. In this study, I investigated the interplay of linguistic and affective functions of brow movements in NGT. Three hypotheses were tested in this thesis. The first is that the affective markers of eyebrows would dominate over the linguistic markers. The second hypothesis predicts that the grammatical markers dominate over the affective brow movements. A third possibility is that a Phonetic Sum would occur in which both functions are combined simultaneously. I elicited sentences combining grammatical and affective functions of eyebrows using a randomised design. Five sentence types were included: declarative sentences, topic sentences, yes-no questions, wh-questions with the wh-sign sentence-final and wh-questions with the wh-sign sentence-initial. These sentences were combined with neutral, surprised, angry, and distressed affect. The brow movements were analysed using the Facial Action Coding System (Ekman, Friesen, & Hager, 2002a). In these sentences, the eyebrows serve a linguistic function, an affective function, or both. One of the possibilities in the latter cases was that a Phonetic Sum would occur that combines both functions simultaneously. Surprisingly, it was found that a Phonetic Sum occurs in which the phonetic weight of Action Unit 4 appears to play an important role. The results show that affect displays may alter question signals in NGT.
  • Vuong, L., Meyer, A. S., & Christiansen, M. H. (2011). Simultaneous online tracking of adjacent and non-adjacent dependencies in statistical learning. In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 964-969). Austin, TX: Cognitive Science Society.
  • Wagner, M. A., Broersma, M., McQueen, J. M., & Lemhöfer, K. (2019). Imitating speech in an unfamiliar language and an unfamiliar non-native accent in the native language. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1362-1366). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    This study concerns individual differences in speech imitation ability and the role that lexical representations play in imitation. We examined 1) whether imitation of sounds in an unfamiliar language (L0) is related to imitation of sounds in an unfamiliar non-native accent in the speaker’s native language (L1) and 2) whether it is easier or harder to imitate speech when you know the words to be imitated. Fifty-nine native Dutch speakers imitated words with target vowels in Basque (/a/ and /e/) and Greek-accented Dutch (/i/ and /u/). Spectral and durational analyses of the target vowels revealed no relationship between the success of L0 and L1 imitation and no difference in performance between tasks (i.e., L1 imitation was neither aided nor blocked by lexical knowledge about the correct pronunciation). The results suggest instead that the relationship of the vowels to native phonological categories plays a bigger role in imitation.
  • Wagner, M., Tran, D., Togneri, R., Rose, P., Powers, D., Onslow, M., Loakes, D., Lewis, T., Kuratate, T., Kinoshita, Y., Kemp, N., Ishihara, S., Ingram, J., Hajek, J., Grayden, D., Göcke, R., Fletcher, J., Estival, D., Epps, J., Dale, R., Cutler, A., Cox, F., Chetty, G., Cassidy, S., Butcher, A., Burnham, D., Bird, S., Best, C., Bennamoun, M., Arciuli, J., & Ambikairajah, E. (2011). The Big Australian Speech Corpus (The Big ASC). In M. Tabain, J. Fletcher, D. Grayden, J. Hajek, & A. Butcher (Eds.), Proceedings of the Thirteenth Australasian International Conference on Speech Science and Technology (pp. 166-170). Melbourne: ASSTA.
  • Weber, A., Melinger, A., & Lara Tapia, L. (2007). The mapping of phonetic information to lexical presentations in Spanish: Evidence from eye movements. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1941-1944). Dudweiler: Pirrot.

    Abstract

    In a visual-world study, we examined spoken-word recognition in Spanish. Spanish listeners followed spoken instructions to click on pictures while their eye movements were monitored. When instructed to click on the picture of a door (puerta), they experienced interference from the picture of a pig (puerco). The same interference from phonologically related items was observed when the displays contained printed names or a combination of pictures with their names printed underneath, although the effect was strongest for displays with printed names. Implications of the finding that the interference effect can be induced with standard pictorial displays as well as with orthographic displays are discussed.
  • Widlok, T. (2006). Two ways of looking at a Mangetti grove. In A. Takada (Ed.), Proceedings of the workshop: Landscape and society (pp. 11-16). Kyoto: 21st Century Center of Excellence Program.
  • Witteman, M. J., Bardhan, N. P., Weber, A., & McQueen, J. M. (2011). Adapting to foreign-accented speech: The role of delay in testing. Journal of the Acoustical Society of America. Program abstracts of the 162nd Meeting of the Acoustical Society of America, 130(4), 2443.

    Abstract

    Understanding speech usually seems easy, but it can become noticeably harder when the speaker has a foreign accent. This is because foreign accents add considerable variation to speech. Research on foreign-accented speech shows that participants are able to adapt quickly to this type of variation. Less is known, however, about longer-term maintenance of adaptation. The current study focused on long-term adaptation by exposing native listeners to foreign-accented speech on Day 1, and testing them on comprehension of the accent one day later. Comprehension was thus not tested immediately, but only after a 24 hour period. On Day 1, native Dutch listeners listened to the speech of a Hebrew learner of Dutch while performing a phoneme monitoring task that did not depend on the talker’s accent. In particular, shortening of the long vowel /i/ into /ɪ/ (e.g., lief [li:f], ‘sweet’, pronounced as [lɪf]) was examined. These mispronunciations did not create lexical ambiguities in Dutch. On Day 2, listeners participated in a cross-modal priming task to test their comprehension of the accent. The results will be contrasted with results from an experiment without delayed testing and related to accounts of how listeners maintain adaptation to foreign-accented speech.
  • Witteman, M. J., Weber, A., & McQueen, J. M. (2011). On the relationship between perceived accentedness, acoustic similarity, and processing difficulty in foreign-accented speech. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 2229-2232).

    Abstract

    Foreign-accented speech is often perceived as more difficult to understand than native speech. What causes this potential difficulty, however, remains unknown. In the present study, we compared acoustic similarity and accent ratings of American-accented Dutch with a cross-modal priming task designed to measure online speech processing. We focused on two Dutch diphthongs: ui and ij. Though both diphthongs deviated from standard Dutch to varying degrees and perceptually varied in accent strength, native Dutch listeners recognized words containing the diphthongs easily. Thus, not all foreign-accented speech hinders comprehension, and acoustic similarity and perceived accentedness are not always predictive of processing difficulties.
  • Wittenburg, P., Brugman, H., Russel, A., Klassmann, A., & Sloetjes, H. (2006). ELAN: a professional framework for multimodality research. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 1556-1559).

    Abstract

    Utilization of computer tools in linguistic research has gained importance with the maturation of media frameworks for the handling of digital audio and video. The increased use of these tools in gesture, sign language and multimodal interaction studies has led to stronger requirements on the flexibility, the efficiency and in particular the time accuracy of annotation tools. This paper describes the efforts made to make ELAN a tool that meets these requirements, with special attention to the developments in the area of time accuracy. In subsequent sections an overview will be given of other enhancements in the latest versions of ELAN, that make it a useful tool in multimodality research.
  • Wittenburg, P., Broeder, D., Klein, W., Levinson, S. C., & Romary, L. (2006). Foundations of modern language resource archives. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 625-628).

    Abstract

    A number of serious reasons will convince an increasing number of researchers to store their relevant material in centers which we will call "language resource archives". These combine the duty of taking care of long-term preservation with the task of giving different user groups access to their material. Access here is meant in the sense that an active interaction with the data will be made possible to support the integration of new data, new versions or commentaries of all sorts. Modern language resource archives will have to adhere to a number of basic principles to fulfill all requirements, and they will have to be involved in federations to create joint language resource domains, making it even simpler for researchers to access the data. This paper makes an attempt to formulate the essential pillars language resource archives have to adhere to.
  • Wolf, M. C., Smith, A. C., Meyer, A. S., & Rowland, C. F. (2019). Modality effects in vocabulary acquisition. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 1212-1218). Montreal, QB: Cognitive Science Society.

    Abstract

    It is unknown whether modality affects the efficiency with which humans learn novel word forms and their meanings, with previous studies reporting both written and auditory advantages. The current study implements controls whose absence in previous work likely offers explanation for such contradictory findings. In two novel word learning experiments, participants were trained and tested on pseudoword - novel object pairs, with controls on: modality of test, modality of meaning, duration of exposure and transparency of word form. In both experiments word forms were presented in either their written or spoken form, each paired with a pictorial meaning (novel object). Following a 20-minute filler task, participants were tested on their ability to identify the picture-word form pairs on which they were trained. A between subjects design generated four participant groups per experiment: 1) written training, written test; 2) written training, spoken test; 3) spoken training, written test; 4) spoken training, spoken test. In Experiment 1 the written stimulus was presented for a time period equal to the duration of the spoken form. Results showed that when the duration of exposure was equal, participants displayed a written training benefit. Given words can be read faster than the time taken for the spoken form to unfold, in Experiment 2 the written form was presented for 300 ms, sufficient time to read the word yet 65% shorter than the duration of the spoken form. No modality effect was observed under these conditions, when exposure to the word form was equivalent. These results demonstrate, at least for proficient readers, that when exposure to the word form is controlled across modalities the efficiency with which word form-meaning associations are learnt does not differ. Our results therefore suggest that, although we typically begin as aural-only word learners, we ultimately converge on developing learning mechanisms that learn equally efficiently from both written and spoken materials.
  • Zeshan, U. (Ed.). (2006). Interrogative and negative constructions in sign languages. Nijmegen: Ishara Press.