Publications: Language and Comprehension

  • Torreira, F., Adda-Decker, M., & Ernestus, M. (2010). The Nijmegen corpus of casual French. Speech Communication, 52, 201-212. doi:10.1016/j.specom.2009.10.004.

    Abstract

    This article describes the preparation, recording and orthographic transcription of a new speech corpus, the Nijmegen Corpus of Casual French (NCCFr). The corpus contains a total of over 36 h of recordings of 46 French speakers engaged in conversations with friends. Casual speech was elicited during three different parts, which together provided around 90 min of speech from every pair of speakers. While Parts 1 and 2 did not require participants to perform any specific task, in Part 3 participants negotiated a common answer to general questions about society. Comparisons with the ESTER corpus of journalistic speech show that the two corpora contain speech of considerably different registers. A number of indicators of casualness, including swear words, casual words, verlan, disfluencies and word repetitions, are more frequent in the NCCFr than in the ESTER corpus, while the use of double negation, an indicator of formal speech, is less frequent. In general, these estimates of casualness are constant through the three parts of the recording sessions and across speakers. Based on these facts, we conclude that our corpus is a rich resource of highly casual speech, and that it can be effectively exploited by researchers in language science and technology.

  • Sadakata, M., Van der Zanden, L., & Sekiyama, K. (2010). Influence of musical training on perception of L2 speech. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 118-121).

    Abstract

    The current study reports specific cases in which a positive transfer of perceptual ability from the music domain to the language domain occurs. We tested whether musical training enhances discrimination and identification performance of L2 speech sounds (timing features, nasal consonants and vowels). Native Dutch and Japanese speakers with different musical training experience, matched for their estimated verbal IQ, participated in the experiments. Results indicated that musical training strongly increases one’s ability to perceive timing information in speech signals. We also found a benefit of musical training on discrimination performance for a subset of the tested vowel contrasts.
  • Huettig, F., & Hartsuiker, R. J. (2010). Listening to yourself is like listening to others: External, but not internal, verbal self-monitoring is based on speech perception. Language and Cognitive Processes, 25(3), 347-374. doi:10.1080/01690960903046926.

    Abstract

    Theories of verbal self-monitoring generally assume an internal (pre-articulatory) monitoring channel, but there is debate about whether this channel relies on speech perception or on production-internal mechanisms. Perception-based theories predict that listening to one's own inner speech has behavioral consequences similar to those of listening to someone else's speech. Our experiment therefore registered eye-movements while speakers named objects accompanied by phonologically related or unrelated written words. The data showed that listening to one's own speech drives eye-movements to phonologically related words, just as listening to someone else's speech does in perception experiments. The time-course of these eye-movements was very similar to that in other-perception (starting 300 ms post-articulation), which demonstrates that these eye-movements were driven by the perception of overt speech, not inner speech. We conclude that external, but not internal, monitoring is based on speech perception.
  • Andics, A., McQueen, J. M., Petersson, K. M., Gál, V., Rudas, G., & Vidnyánszky, Z. (2010). Neural mechanisms for voice recognition. NeuroImage, 52, 1528-1540. doi:10.1016/j.neuroimage.2010.05.048.

    Abstract

    We investigated neural mechanisms that support voice recognition in a training paradigm with fMRI. The same listeners were trained on different weeks to categorize the mid-regions of voice-morph continua as an individual's voice. Stimuli implicitly defined a voice-acoustics space, and training explicitly defined a voice-identity space. The predefined centre of the voice category was shifted from the acoustic centre each week in opposite directions, so the same stimuli had different training histories on different tests. Cortical sensitivity to voice similarity appeared over different time-scales and at different representational stages. First, there were short-term adaptation effects: Increasing acoustic similarity to the directly preceding stimulus led to haemodynamic response reduction in the middle/posterior STS and in right ventrolateral prefrontal regions. Second, there were longer-term effects: Response reduction was found in the orbital/insular cortex for stimuli that were most versus least similar to the acoustic mean of all preceding stimuli, and, in the anterior temporal pole, the deep posterior STS and the amygdala, for stimuli that were most versus least similar to the trained voice-identity category mean. These findings are interpreted as effects of neural sharpening of long-term stored typical acoustic and category-internal values. The analyses also reveal anatomically separable voice representations: one in a voice-acoustics space and one in a voice-identity space. Voice-identity representations flexibly followed the trained identity shift, and listeners with a greater identity effect were more accurate at recognizing familiar voices. Voice recognition is thus supported by neural voice spaces that are organized around flexible ‘mean voice’ representations.
  • Reinisch, E. (2010). Processing the fine temporal structure of spoken words. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Van de Ven, M., Tucker, B. V., & Ernestus, M. (2010). Semantic facilitation in bilingual everyday speech comprehension. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 1245-1248).

    Abstract

    Previous research suggests that bilinguals presented with low- and high-predictability sentences benefit from semantics in clear but not in conversational speech [1]. In everyday speech, however, many words are not highly predictable. Previous research has shown that native listeners can also use more subtle semantic contextual information [2]. The present study reports two auditory lexical decision experiments investigating to what extent late Asian-English bilinguals benefit from subtle semantic cues in their processing of unreduced and reduced English speech. Our results indicate that these bilinguals are less sensitive to semantic cues than native listeners in both speech registers.
  • Sjerps, M. J., & Reinisch, E. (2009). Speaking rate and spectral context affect the Dutch /a/ - /aa/ contrast. Poster presented at 12th NVP Winter Conference on Cognition, Brain, and Behaviour (Dutch Psychonomic Society), Egmond aan Zee, the Netherlands.

    Abstract

    Dutch minimal word pairs such as 'gaas'-'gas' ("gauze"-"gas") differ in durational and spectral aspects of their vowels. These cues, however, are interpreted relative to the context in which they are heard. In a fast context, an "a" sounds relatively longer and is more likely to be interpreted as "aa". Similarly, when low frequencies in a context are perceived as dominant, high frequencies in the "a" become more salient, again more often leading to perception of "aa". A categorization experiment in which durational and spectral cues to the vowels were varied confirmed that Dutch listeners use both dimensions to distinguish between "a" and "aa". In Experiment 2, words were presented in rate- and spectrally manipulated sentences. Listeners, as predicted, interpreted the vowels relative to the context. An eye-tracking experiment will investigate the time course of these context effects and thus inform theories of the role of context in speech recognition.
  • Janse, E. (2009). Processing of fast speech by elderly listeners. Journal of the Acoustical Society of America, 125(4), 2361-2373. doi:10.1121/1.3082117.

    Abstract

    This study investigates the relative contributions of auditory and cognitive factors to the common finding that an increase in speech rate affects elderly listeners more than young listeners. Since a direct relation between non-auditory factors, such as age-related cognitive slowing, and fast speech performance has been difficult to demonstrate, the present study took an on-line, rather than off-line, approach and focused on processing time. Elderly and young listeners were presented with speech at two rates of time compression and were asked to detect pre-assigned target words as quickly as possible. A number of auditory and cognitive measures were entered in a statistical model as predictors of elderly participants’ fast speech performance: hearing acuity, an information processing rate measure, and two measures of reading speed. The results showed that hearing loss played a primary role in explaining elderly listeners’ increased difficulty with fast speech. However, non-auditory factors such as reading speed and the extent to which participants were affected by increased rate of presentation in a visual analog of the listening experiment also predicted fast speech performance differences among the elderly participants. These on-line results confirm that slowed information processing is indeed part of elderly listeners’ problem keeping up with fast language.
  • Janse, E. (2009). Hearing and cognitive measures predict elderly listeners' difficulty ignoring competing speech. Talk presented at NAG-DAGA International Conference on Acoustics. Rotterdam, The Netherlands. 2009-03-23 - 2009-03-26.
  • Calandruccio, L., Brouwer, S., Van Engen, K. J., Dhar, S., & Bradlow, A. R. (2009). Non-native speech perception in the presence of competing speech noise. Talk presented at 2009 ASHA Convention (American Speech-Language-Hearing Association). New Orleans, LA. 2009-11-19 - 2009-11-21.
  • Janse, E., & Adank, P. (2009). Perceptual learning of a foreign accent in young and elderly listeners. Poster presented at Aging and Speech Communication interdisciplinary research conference, Indiana University, Bloomington, IN.

    Abstract

    In this study we investigated perceptual learning of a foreign accent in young and elderly listeners by testing speech reception thresholds (SRTs) over consecutive blocks of speech materials. Participants (20 young and 30 elderly) were first presented with four blocks of Standard Dutch sentences to establish their baseline SRTs. They were then presented with four sentence blocks spoken by the same speaker, but now in an artificial foreign accent of Dutch in which the pronunciation of all vowel phonemes was systematically altered. We studied whether young and elderly listeners show similar-sized effects of accent on their SRTs and similar amounts of adaptation. SRTs in both age groups were higher in the accented than in the non-accented condition. In the accented condition, SRTs decreased more over blocks than in the non-accented condition, indicating that listeners adapted to the accent. Importantly, a three-way interaction between speech type, block, and age group indicated that the pattern of adaptation to the accent differed between the age groups: whereas the elderly hardly showed further adaptation beyond the second block, the young adults did show further improvement with longer exposure. Among the elderly participants, hearing acuity predicted both one's SRT and the accent effect on one's SRT. Furthermore, a measure of executive function predicted the impact of the accent on one's SRT. In sum, these results indicate that accentedness is more detrimental to speech understanding in elderly than in young adults. This seems to be due both to poorer hearing and to decreased mental flexibility in the elderly.
  • Janse, E., & Ernestus, M. (2009). Recognition of reduced words, context use, and age-related hearing loss. Talk presented at Dag van de Fonetiek (Annual day of the Dutch Phonetics Association). Utrecht, The Netherlands. 2009-12-18.
  • Huettig, F. (2009). The role of colour during language-vision interactions. Talk presented at International Conference on Language-Cognition Interface 2009. Allahabad, India. 2009-12-06 - 2009-12-09.
  • Huettig, F. (2009). Language-mediated visual search. Invited talk presented at VU Amsterdam. Amsterdam, The Netherlands.
  • Lemhöfer, K., & Broersma, M. (2009). LexTALE: A quick, but valid measure for English proficiency. Poster presented at 15th Annual conference on architectures and mechanisms for language processing [AMLaP 2009], Barcelona, Spain.
  • Huettig, F. (2009). On the use of distributional models of semantic space to investigate human cognition. Talk presented at Distributional Semantics beyond Concrete Concepts (Workshop at the Annual Meeting of the Cognitive Science Society, CogSci 2009). Amsterdam, The Netherlands. 2009-07-29 - 2009-08-01.
  • Schuppler, B., Van Dommelen, W., Koreman, J., & Ernestus, M. (2009). Word-final [t]-deletion: An analysis on the segmental and sub-segmental level. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 2275-2278). Causal Productions Pty Ltd.

    Abstract

    This paper presents a study on the reduction of word-final [t]s in conversational standard Dutch. Based on a large number of tokens annotated at the segmental level, we show that bigram frequency and segmental context are the main predictors of the absence of [t]s. In a second study, we present an analysis of the detailed acoustic properties of word-final [t]s and show that bigram frequency and context also play a role at the sub-segmental level. This paper extends research on the realization of /t/ in spontaneous speech and shows the importance of incorporating sub-segmental properties in models of speech.
  • Adank, P., & Janse, E. (2009). Perceptual adaptation to an unfamiliar accent in young and elderly listeners. Talk presented at British Society of Audiology Short Papers Meeting on Experimental Studies of Hearing and Deafness. University of Southampton, UK. 2009-09-18.
  • Broersma, M. (2009). A lamp in the evil empire: When nonnative listeners hear words that aren’t really there. Talk presented at 7th International Symposium on Bilingualism. Utrecht, The Netherlands. 2009-07-08.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2009). Speaking rate context affects online word segmentation: Evidence from eye-tracking. Talk presented at "Speech perception and production in the brain" Summer Workshop of the Dutch Phonetic Society (NVFW). Leiden, the Netherlands. 2009-06-05.
