Publications

  • Wagner, M. A., Broersma, M., McQueen, J. M., & Lemhöfer, K. (2019). Imitating speech in an unfamiliar language and an unfamiliar non-native accent in the native language. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1362-1366). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    This study concerns individual differences in speech imitation ability and the role that lexical representations play in imitation. We examined 1) whether imitation of sounds in an unfamiliar language (L0) is related to imitation of sounds in an unfamiliar non-native accent in the speaker’s native language (L1) and 2) whether it is easier or harder to imitate speech when you know the words to be imitated. Fifty-nine native Dutch speakers imitated words with target vowels in Basque (/a/ and /e/) and Greek-accented Dutch (/i/ and /u/). Spectral and durational analyses of the target vowels revealed no relationship between the success of L0 and L1 imitation and no difference in performance between tasks (i.e., L1 imitation was neither aided nor blocked by lexical knowledge about the correct pronunciation). The results suggest instead that the relationship of the vowels to native phonological categories plays a bigger role in imitation.
  • Weber, A., & Melinger, A. (2008). Name dominance in spoken word recognition is (not) modulated by expectations: Evidence from synonyms. In A. Botinis (Ed.), Proceedings of ISCA Tutorial and Research Workshop On Experimental Linguistics (ExLing 2008) (pp. 225-228). Athens: University of Athens.

    Abstract

    Two German eye-tracking experiments tested whether top-down expectations interact with acoustically driven word-recognition processes. Competitor objects with two synonymous names were paired with target objects whose names shared word onsets with either the dominant or the non-dominant name of the competitor. Non-dominant names of competitor objects were either introduced before the test session or not. Eye-movements were monitored while participants heard instructions to click on target objects. Results demonstrate that both dominant and non-dominant competitor names were considered for recognition, regardless of top-down expectations, though dominant names were always activated more strongly.
  • Weber, A. (2000). Phonotactic and acoustic cues for word segmentation in English. In Proceedings of the 6th International Conference on Spoken Language Processing (ICSLP 2000) (pp. 782-785).

    Abstract

    This study investigates the influence of both phonotactic and acoustic cues on the segmentation of spoken English. Listeners detected embedded English words in nonsense sequences (word spotting). Words aligned with phonotactic boundaries were easier to detect than words without such alignment. Acoustic cues to boundaries could also have signaled word boundaries, especially when word onsets lacked phonotactic alignment. However, only one of several durational boundary cues showed a marginally significant correlation with response times (RTs). The results suggest that word segmentation in English is influenced primarily by phonotactic constraints and only secondarily by acoustic aspects of the speech signal.
  • Weber, A. (2000). The role of phonotactics in the segmentation of native and non-native continuous speech. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP, Workshop on Spoken Word Access Processes. Nijmegen: MPI for Psycholinguistics.

    Abstract

    Previous research has shown that listeners make use of their knowledge of phonotactic constraints to segment speech into individual words. The present study investigates the influence of phonotactics when segmenting a non-native language. German and English listeners detected embedded English words in nonsense sequences. German listeners also had knowledge of English, but English listeners had no knowledge of German. Word onsets were either aligned with a syllable boundary or not, according to the phonotactics of the two languages. Words aligned with either German or English phonotactic boundaries were easier for German listeners to detect than words without such alignment. Responses of English listeners were influenced primarily by English phonotactic alignment. The results suggest that both native and non-native phonotactic constraints influence lexical segmentation of a non-native, but familiar, language.
  • Weber, A. (2008). What the eyes can tell us about spoken-language comprehension [Abstract]. Journal of the Acoustical Society of America, 124, 2474-2474.

    Abstract

    Lexical recognition is typically slower in L2 than in L1. Part of the difficulty comes from insufficiently precise processing of L2 phonemes. Consequently, L2 listeners fail to eliminate candidate words that L1 listeners can exclude from competing for recognition. For instance, the inability to distinguish /r/ from /l/ in rocket and locker makes both words possible candidates for Japanese listeners when hearing their onset (e.g., Cutler, Weber, and Otake, 2006). The L2 disadvantage can, however, be dispelled: for L2 listeners, but not L1 listeners, L2 speech from a non-native talker with the same language background is known to be as intelligible as L2 speech from a native talker (e.g., Bent and Bradlow, 2003). A reason for this may be that L2 listeners have ample experience with segmental deviations that are characteristic of their own accent. On this account, only phonemic deviations that are typical of the listeners’ own accent will cause spurious lexical activation in L2 listening (e.g., English magic pronounced as megic for Dutch listeners). In this talk, I will present evidence from cross-modal priming studies with a variety of L2 listener groups, showing how the processing of phonemic deviations is accent-specific but withstands fine phonetic differences.
  • Wolf, M. C., Smith, A. C., Meyer, A. S., & Rowland, C. F. (2019). Modality effects in vocabulary acquisition. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 1212-1218). Montreal, QC: Cognitive Science Society.

    Abstract

    It is unknown whether modality affects the efficiency with which humans learn novel word forms and their meanings, with previous studies reporting both written and auditory advantages. The current study implements controls whose absence in previous work likely explains such contradictory findings. In two novel word learning experiments, participants were trained and tested on pseudoword-novel object pairs, with controls on modality of test, modality of meaning, duration of exposure, and transparency of word form. In both experiments word forms were presented in either their written or spoken form, each paired with a pictorial meaning (novel object). Following a 20-minute filler task, participants were tested on their ability to identify the picture-word form pairs on which they were trained. A between-subjects design generated four participant groups per experiment: 1) written training, written test; 2) written training, spoken test; 3) spoken training, written test; 4) spoken training, spoken test. In Experiment 1 the written stimulus was presented for a time period equal to the duration of the spoken form. Results showed that when the duration of exposure was equal, participants displayed a written training benefit. Given that words can be read faster than the time taken for the spoken form to unfold, in Experiment 2 the written form was presented for 300 ms, sufficient time to read the word yet 65% shorter than the duration of the spoken form. No modality effect was observed under these conditions, when exposure to the word form was equivalent. These results demonstrate, at least for proficient readers, that when exposure to the word form is controlled across modalities, the efficiency with which word form-meaning associations are learnt does not differ. Our results therefore suggest that, although we typically begin as aural-only word learners, we ultimately develop learning mechanisms that learn equally efficiently from both written and spoken materials.
  • Zinn, C., Cablitz, G., Ringersma, J., Kemps-Snijders, M., & Wittenburg, P. (2008). Constructing knowledge spaces from linguistic resources. In Proceedings of the CIL 18 Workshop on Linguistic Studies of Ontology: From lexical semantics to formal ontologies and back.
  • Zinn, C. (2008). Conceptual spaces in ViCoS. In S. Bechhofer, M. Hauswirth, J. Hoffmann, & M. Koubarakis (Eds.), The semantic web: Research and applications (pp. 890-894). Berlin: Springer.

    Abstract

    We describe ViCoS, a tool for constructing and visualising conceptual spaces in the area of language documentation. ViCoS allows users to enrich existing lexical information about the words of a language with conceptual knowledge. This work towards language-based, informal ontology building must be supported by easy-to-use workflows and software, which we will demonstrate.
  • De Zubicaray, G. I., Acheson, D. J., & Hartsuiker, R. J. (Eds.). (2013). Mind what you say - general and specific mechanisms for monitoring in speech production [Research topic] [Special Issue]. Frontiers in Human Neuroscience. Retrieved from http://www.frontiersin.org/human_neuroscience/researchtopics/mind_what_you_say_-_general_an/1197.

    Abstract

    Psycholinguistic research has typically portrayed speech production as a relatively automatic process. This is because errors, when they are made, occur as seldom as one in every thousand words we utter. However, it has long been recognised that we need some form of control over what we are currently saying and what we plan to say. This capacity to both monitor our inner speech and self-correct our speech output has often been assumed to be a property of the language comprehension system. More recently, it has been demonstrated that speech production benefits from interfacing with more general cognitive processes such as selective attention, short-term memory (STM) and online response monitoring to resolve potential conflict and successfully produce the output of a verbal plan. The conditions and levels of representation according to which these more general planning, monitoring and control processes are engaged during speech production remain poorly understood. Moreover, there remains a paucity of information about their neural substrates, despite some of the first evidence of more general monitoring having come from electrophysiological studies of error-related negativities (ERNs). While aphasic speech errors continue to be a rich source of information, there has been comparatively little research focus on instances of speech repair. The purpose of this Frontiers Research Topic is to provide a forum for researchers to contribute investigations employing behavioural, neuropsychological, electrophysiological, neuroimaging and virtual lesioning techniques. In addition, while the focus of the research topic is on novel findings, we welcome submission of computational simulations, review articles and methods papers.
  • Zwitserlood, I., Ozyurek, A., & Perniss, P. M. (2008). Annotation of sign and gesture cross-linguistically. In O. Crasborn, E. Efthimiou, T. Hanke, E. D. Thoutenhoofd, & I. Zwitserlood (Eds.), Construction and Exploitation of Sign Language Corpora. 3rd Workshop on the Representation and Processing of Sign Languages (pp. 185-190). Paris: ELDA.

    Abstract

    This paper discusses the construction of a cross-linguistic, bimodal corpus containing three modes of expression: expressions from two sign languages; speech and gestural expressions in two spoken languages; and pantomimic expressions by users of two spoken languages who are requested to convey information without speaking. We discuss some problems and tentative solutions for the annotation of utterances expressing spatial information about referents in these three modes, suggesting a set of comparable codes for the description of both sign and gesture. Furthermore, we discuss the processing of entered annotations in ELAN, e.g., relating descriptive annotations to analytic annotations in all three modes and performing relational searches across annotations on different tiers.