Bentum, M., Ten Bosch, L., Van den Bosch, A., & Ernestus, M. (2019). Listening with great expectations: An investigation of word form anticipations in naturalistic speech. In Proceedings of Interspeech 2019 (pp. 2265-2269). doi:10.21437/Interspeech.2019-2741.
Abstract
The event-related potential (ERP) component named phonological mismatch negativity (PMN) arises when listeners hear an unexpected word form in a spoken sentence [1]. The PMN is thought to reflect the mismatch between expected and perceived auditory speech input. In this paper, we use the PMN to test a central premise in the predictive coding framework [2], namely that the mismatch between prior expectations and sensory input is an important mechanism of perception. We test this with natural speech materials containing approximately 50,000 word tokens. The corresponding EEG-signal was recorded while participants (n = 48) listened to these materials. Following [3], we quantify the mismatch with two word probability distributions (WPD): a WPD based on preceding context, and a WPD that is additionally updated based on the incoming audio of the current word. We use the between-WPD cross entropy for each word in the utterances and show that a higher cross entropy correlates with a more negative PMN. Our results show that listeners anticipate auditory input while processing each word in naturalistic speech. Moreover, complementing previous research, we show that predictive language processing occurs across the whole probability spectrum. -
Bentum, M., Ten Bosch, L., Van den Bosch, A., & Ernestus, M. (2019). Quantifying expectation modulation in human speech processing. In Proceedings of Interspeech 2019 (pp. 2270-2274). doi:10.21437/Interspeech.2019-2685.
Abstract
The mismatch between top-down predicted and bottom-up perceptual input is an important mechanism of perception according to the predictive coding framework (Friston, [1]). In this paper we develop and validate a new information-theoretic measure that quantifies the mismatch between expected and observed auditory input during speech processing. We argue that such a mismatch measure is useful for the study of speech processing. To compute the mismatch measure, we use naturalistic speech materials containing approximately 50,000 word tokens. For each word token we first estimate the prior word probability distribution with the aid of statistical language modelling, and next use automatic speech recognition to update this word probability distribution based on the unfolding speech signal. We validate the mismatch measure with multiple analyses, and show that the auditory-based update improves the probability of the correct word and lowers the uncertainty of the word probability distribution. Based on these results, we argue that it is possible to explicitly estimate the mismatch between predicted and perceived speech input with the cross entropy between word expectations computed before and after an auditory update. -
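The measure described here — the cross entropy between the word probability distribution before and after the auditory update — can be illustrated with a toy sketch (a made-up three-word vocabulary, not the authors' language-model or ASR pipeline):

```python
import math

def cross_entropy(p_updated, p_prior):
    """Cross entropy H(p_updated, p_prior) = -sum p_updated(w) * log2 p_prior(w).

    A large value means the audio-updated distribution diverges strongly
    from the context-based prior, i.e. a large expectation mismatch.
    """
    return -sum(p_updated[w] * math.log2(p_prior[w])
                for w in p_updated if p_updated[w] > 0)

# Hypothetical vocabulary: the contextual prior favours "cat", but the
# unfolding audio shifts probability mass toward "hat".
prior   = {"cat": 0.70, "hat": 0.20, "mat": 0.10}
updated = {"cat": 0.10, "hat": 0.85, "mat": 0.05}

mismatch = cross_entropy(updated, prior)
```

In the paper the two distributions come from a statistical language model (prior) and an ASR-based update over ~50,000 word tokens; the sketch only shows the final arithmetic.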
Bentum, M., Ten Bosch, L., Van den Bosch, A., & Ernestus, M. (2019). Do speech registers differ in the predictability of words? International Journal of Corpus Linguistics, 24(1), 98-130. doi:10.1075/ijcl.17062.ben.
Abstract
Previous research has demonstrated that language use can vary depending on the context of situation. The present paper extends this finding by comparing word predictability differences between 14 speech registers ranging from highly informal conversations to read-aloud books. We trained 14 statistical language models to compute register-specific word predictability and trained a register classifier on the perplexity score vector of the language models. The classifier distinguishes perfectly between samples from all speech registers and this result generalizes to unseen materials. We show that differences in vocabulary and sentence length cannot explain the speech register classifier’s performance. The combined results show that speech registers differ in word predictability. -
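The classification scheme in this abstract — a perplexity score under each register-specific language model, fed to a classifier — can be sketched with two toy "registers" and add-one-smoothed unigram models (stand-ins for the paper's 14 registers and its actual language models):

```python
import math
from collections import Counter

def train_unigram(tokens, vocab):
    """Add-one-smoothed unigram model, a minimal register-specific LM."""
    counts = Counter(tokens)
    total = len(tokens) + len(vocab)
    return {w: (counts[w] + 1) / total for w in vocab}

def perplexity(model, tokens):
    logp = sum(math.log2(model[w]) for w in tokens)
    return 2 ** (-logp / len(tokens))

# Two toy registers: informal conversation vs. news-like read speech.
conv = "uh yeah well uh you know yeah".split()
news = "the government announced the new policy today".split()
vocab = set(conv) | set(news)

lm_conv = train_unigram(conv, vocab)
lm_news = train_unigram(news, vocab)

# A held-out sample is represented by its vector of perplexities, one per
# LM; the lowest-perplexity model identifies the register.
sample = "uh yeah you know".split()
ppl_vector = [perplexity(lm_conv, sample), perplexity(lm_news, sample)]
predicted = min(range(2), key=lambda i: ppl_vector[i])
```

The paper trains a classifier on the full 14-dimensional perplexity vector rather than taking the minimum, but the representation is the same idea.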
Eijk, L., Ernestus, M., & Schriefers, H. (2019). Alignment of pitch and articulation rate. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 2690-2694). Canberra, Australia: Australasian Speech Science and Technology Association Inc.
Abstract
Previous studies have shown that speakers align their speech to each other at multiple linguistic levels. This study investigates whether alignment is mostly the result of priming from the immediately preceding speech materials, focussing on pitch and articulation rate (AR). Native Dutch speakers completed sentences, first by themselves (pre-test), then in alternation with Confederate 1 (Round 1), with Confederate 2 (Round 2), with Confederate 1 again (Round 3), and lastly by themselves again (post-test). Results indicate that participants aligned to the confederates and that this alignment lasted during the post-test. The confederates’ directly preceding sentences were not good predictors for the participants’ pitch and AR. Overall, the results indicate that alignment is more of a global effect than a local priming effect. -
Felker, E. R., Ernestus, M., & Broersma, M. (2019). Evaluating dictation task measures for the study of speech perception. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 383-387). Canberra, Australia: Australasian Speech Science and Technology Association Inc.
Abstract
This paper shows that the dictation task, a well-known testing instrument in language education, has untapped potential as a research tool for studying speech perception. We describe how transcriptions can be scored on measures of lexical, orthographic, phonological, and semantic similarity to target phrases to provide comprehensive information about accuracy at different processing levels. The former three measures are automatically extractable, increasing objectivity, and the middle two are gradient, providing finer-grained information than traditionally used. We evaluate the measures in an English dictation task featuring phonetically reduced continuous speech. Whereas the lexical and orthographic measures emphasize listeners’ word identification difficulties, the phonological measure demonstrates that listeners can often still recover phonological features, and the semantic measure captures their ability to get the gist of the utterances. Correlational analyses and a discussion of practical and theoretical considerations show that combining multiple measures improves the dictation task’s utility as a research tool. -
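A gradient, automatically extractable similarity score of the kind this abstract describes can be sketched with a character-level matcher (a stand-in illustration; the paper's actual scoring measures and target phrases are not reproduced here):

```python
import difflib

def orthographic_similarity(target, response):
    """Gradient orthographic match score in [0, 1] between a target
    phrase and a listener's transcription (1.0 = identical)."""
    return difflib.SequenceMatcher(None, target.lower(), response.lower()).ratio()

# Hypothetical target phrase and transcriptions of varying accuracy.
target  = "he must have missed the train"
exact   = orthographic_similarity(target, "he must have missed the train")
close   = orthographic_similarity(target, "he must of missed the train")
distant = orthographic_similarity(target, "the rain in spain")
```

Unlike a binary correct/incorrect score, the ratio credits partially correct transcriptions, which is the "finer-grained information" the abstract refers to.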
Felker, E. R., Ernestus, M., & Broersma, M. (2019). Lexically guided perceptual learning of a vowel shift in an interactive L2 listening context. In Proceedings of Interspeech 2019 (pp. 3123-3127). doi:10.21437/Interspeech.2019-1414.
Abstract
Lexically guided perceptual learning has traditionally been studied with ambiguous consonant sounds to which native listeners are exposed in a purely receptive listening context. To extend previous research, we investigate whether lexically guided learning applies to a vowel shift encountered by non-native listeners in an interactive dialogue. Dutch participants played a two-player game in English in either a control condition, which contained no evidence for a vowel shift, or a lexically constraining condition, in which onscreen lexical information required them to re-interpret their interlocutor’s /ɪ/ pronunciations as representing /ε/. A phonetic categorization pre-test and post-test were used to assess whether the game shifted listeners’ phonemic boundaries such that more of the /ε/-/ɪ/ continuum came to be perceived as /ε/. Both listener groups showed an overall post-test shift toward /ɪ/, suggesting that vowel perception may be sensitive to directional biases related to properties of the speaker’s vowel space. Importantly, listeners in the lexically constraining condition made relatively more post-test /ε/ responses than the control group, thereby exhibiting an effect of lexically guided adaptation. The results thus demonstrate that non-native listeners can adjust their phonemic boundaries on the basis of lexical information to accommodate a vowel shift learned in interactive conversation. -
Koppen, K., Ernestus, M., & Van Mulken, M. (2019). The influence of social distance on speech behavior: Formality variation in casual speech. Corpus Linguistics and Linguistic Theory, 15(1), 139-165. doi:10.1515/cllt-2016-0056.
Abstract
An important dimension of linguistic variation is formality. This study investigates the role of social distance between interlocutors. Twenty-five native Dutch speakers retold eight short films to confederates, who acted either formally or informally. Speakers were familiarized with the informal confederates, whereas the formal confederates remained strangers. Results show that the two types of interlocutors elicited different versions of the same stories. Formal interlocutors (large social distance) elicited lower articulation rates, and more nouns and prepositions, both indicators of explicit information. Speakers addressing interlocutors to whom social distance was small, however, provided more explicit information with an involved character (i.e. adjectives with subjective meanings). They also used the word and more often as a gap filler or as a way to keep the floor. Furthermore, they were more likely to laugh and to use more interjections, first-person pronouns and direct speech, which are all indicators of involvement, empathy and subjectivity. -
Marcoux, K., & Ernestus, M. (2019). Differences between native and non-native Lombard speech in terms of pitch range. In M. Ochmann, M. Vorländer, & J. Fels (Eds.), Proceedings of the ICA 2019 and EAA Euroregio. 23rd International Congress on Acoustics, integrating 4th EAA Euroregio 2019 (pp. 5713-5720). Berlin: Deutsche Gesellschaft für Akustik.
Abstract
Lombard speech, speech produced in noise, is acoustically different from speech produced in quiet (plain speech) in several ways, including having a higher and wider F0 range (pitch). Extensive research on native Lombard speech does not consider that non-natives experience a higher cognitive load while producing speech and that the native language may influence the non-native speech. We investigated pitch range in plain and Lombard speech in native and non-native speakers. Dutch and American-English speakers read contrastive question-answer pairs in quiet and in noise in English, while the Dutch also read Dutch sentence pairs. We found that Lombard speech is characterized by a wider pitch range than plain speech, for all speakers (native English, non-native English, and native Dutch). This shows that non-natives also widen their pitch range in Lombard speech. In sentences with early-focus, we see the same increase in pitch range when going from plain to Lombard speech in native and non-native English, but a smaller increase in native Dutch. In sentences with late-focus, we see the biggest increase for the native English, followed by non-native English and then native Dutch. Together these results indicate an effect of the native language on non-native Lombard speech. -
Marcoux, K., & Ernestus, M. (2019). Pitch in native and non-native Lombard speech. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 2605-2609). Canberra, Australia: Australasian Speech Science and Technology Association Inc.
Abstract
Lombard speech, speech produced in noise, is typically produced with a higher fundamental frequency (F0, pitch) compared to speech in quiet. This paper examined the potential differences in native and non-native Lombard speech by analyzing median pitch in sentences with early- or late-focus produced in quiet and noise. We found an increase in pitch in late-focus sentences in noise for Dutch speakers in both English and Dutch, and for American-English speakers in English. These results show that non-native speakers produce Lombard speech, despite their higher cognitive load. For the early-focus sentences, we found a difference between the Dutch and the American-English speakers. Whereas the Dutch showed an increased F0 in noise in English and Dutch, the American-English speakers did not in English. Together, these results suggest that some acoustic characteristics of Lombard speech, such as pitch, may be language-specific, potentially resulting in the native language influencing the non-native Lombard speech. -
Merkx, D., Frank, S., & Ernestus, M. (2019). Language learning using speech to image retrieval. In Proceedings of Interspeech 2019 (pp. 1841-1845). doi:10.21437/Interspeech.2019-3067.
Abstract
Humans learn language by interaction with their environment and listening to other humans. It should also be possible for computational models to learn language directly from speech but so far most approaches require text. We improve on existing neural network approaches to create visually grounded embeddings for spoken utterances. Using a combination of a multi-layer GRU, importance sampling, cyclic learning rates, ensembling and vectorial self-attention our results show a remarkable increase in image-caption retrieval performance over previous work. Furthermore, we investigate which layers in the model learn to recognise words in the input. We find that deeper network layers are better at encoding word presence, although the final layer has slightly lower performance. This shows that our visually grounded sentence encoder learns to recognise words from the input even though it is not explicitly trained for word recognition. -
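The retrieval step this abstract evaluates — ranking images by the similarity of their embeddings to a spoken utterance's embedding — can be sketched in miniature (toy 3-dimensional vectors and hypothetical image names; the paper's GRU encoder, importance sampling, and attention layers are not reproduced):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# In a trained visually grounded model, an utterance about a dog should
# embed closest to the matching image.
speech_emb = [0.9, 0.1, 0.2]
image_embs = {"dog_photo": [0.8, 0.2, 0.1],
              "car_photo": [0.0, 0.9, 0.4]}

ranked = sorted(image_embs,
                key=lambda k: cosine(speech_emb, image_embs[k]),
                reverse=True)
```

Image-caption retrieval performance is then typically reported as how often the correct image appears at the top of such a ranking (recall@N).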
Nijveld, A., Ten Bosch, L., & Ernestus, M. (2019). ERP signal analysis with temporal resolution using a time window bank. In Proceedings of Interspeech 2019 (pp. 1208-1212). doi:10.21437/Interspeech.2019-2729.
Abstract
In order to study the cognitive processes underlying speech comprehension, neuro-physiological measures (e.g., EEG and MEG), or behavioural measures (e.g., reaction times and response accuracy) can be applied. Compared to behavioural measures, EEG signals can provide a more fine-grained and complementary view of the processes that take place during the unfolding of an auditory stimulus.
EEG signals are often analysed after having chosen specific time windows, which are usually based on the temporal structure of ERP components expected to be sensitive to the experimental manipulation. However, as the timing of ERP components may vary between experiments, trials, and participants, such a-priori defined analysis time windows may significantly hamper the exploratory power of the analysis of components of interest. In this paper, we explore a wide-window analysis method applied to EEG signals collected in an auditory repetition priming experiment.
This approach is based on a bank of temporal filters arranged along the time axis in combination with linear mixed effects modelling. Crucially, it permits a temporal decomposition of effects in a single comprehensive statistical model which captures the entire EEG trace. -
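The core idea of the time window bank — summarising the EEG trace in overlapping windows arranged along the time axis, instead of committing to one a-priori window — can be sketched as follows (a minimal reading of the abstract, not the authors' implementation, which combines the bank with linear mixed effects modelling):

```python
def window_bank(signal, width, step):
    """Mean amplitude per overlapping window sliding along the trace."""
    return [sum(signal[start:start + width]) / width
            for start in range(0, len(signal) - width + 1, step)]

# Toy "ERP trace" with a negativity around samples 3-6; each window mean
# becomes one predictor, so an effect can be localised in time.
trace = [0, 0, 0, -1, -3, -3, -1, 0, 0, 0]
features = window_bank(trace, width=4, step=2)
```

The windows covering the negativity yield the most negative means, which is how the method recovers the timing of a component without fixing its latency in advance.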
Rodd, J., Bosker, H. R., Ten Bosch, L., & Ernestus, M. (2019). Deriving the onset and offset times of planning units from acoustic and articulatory measurements. The Journal of the Acoustical Society of America, 145(2), EL161-EL167. doi:10.1121/1.5089456.
Abstract
Many psycholinguistic models of speech sequence planning make claims about the onset and offset times of planning units, such as words, syllables, and phonemes. These predictions typically go untested, however, since psycholinguists have assumed that the temporal dynamics of the speech signal is a poor index of the temporal dynamics of the underlying speech planning process. This article argues that this problem is tractable, and presents and validates two simple metrics that derive planning unit onset and offset times from the acoustic signal and articulatographic data. -
Troncoso Ruiz, A., Ernestus, M., & Broersma, M. (2019). Learning to produce difficult L2 vowels: The effects of awareness-raising, exposure and feedback. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1094-1098). Canberra, Australia: Australasian Speech Science and Technology Association Inc. -
Braun, B., Dainora, A., & Ernestus, M. (2011). An unfamiliar intonation contour slows down online speech comprehension. Language and Cognitive Processes, 26(3), 350-375. doi:10.1080/01690965.2010.492641.
Abstract
This study investigates whether listeners' familiarity with an intonation contour affects speech processing. In three experiments, Dutch participants heard Dutch sentences with normal intonation contours and with unfamiliar ones and performed word-monitoring, lexical decision, or semantic categorisation tasks (the latter two with cross-modal identity priming). The unfamiliar intonation contour slowed down participants on all tasks, which demonstrates that an unfamiliar intonation contour has a robust detrimental effect on speech processing. Since cross-modal identity priming with a lexical decision task taps into lexical access, this effect obtained in this task suggests that an unfamiliar intonation contour hinders lexical access. Furthermore, results from the semantic categorisation task show that the effect of an uncommon intonation contour is long-lasting and hinders subsequent processing. Hence, intonation not only contributes to utterance meaning (emotion, sentence type, and focus), but also affects crucial aspects of the speech comprehension process and is more important than previously thought. -
Brenner, D., Warner, N., Ernestus, M., & Tucker, B. V. (2011). Parsing the ambiguity of casual speech: “He was like” or “He’s like”? [Abstract]. The Journal of the Acoustical Society of America, 129(4 Pt. 2), 2683.
Abstract
Paper presented at the 161st Meeting of the Acoustical Society of America, Seattle, Washington, 23-27 May 2011. Reduction in casual speech can create ambiguity, e.g., “he was” can sound like “he’s.” Before quotative “like” (“so she’s/she was like…”), there is little accurate acoustic information about the distinction in the signal. This work examines what types of information (acoustics of the target itself, speech rate, coarticulation, and syntax/semantics) listeners use to recognize such reduced function words. We compare perception studies presenting the targets auditorily with varying amounts of context, presenting the context without the targets, and a visual study presenting the context in written form. Given primarily discourse information (visual or auditory context only), subjects are strongly biased toward past, reflecting the use of quotative “like” for reporting past speech. However, if the target itself is presented, the direction of bias reverses, indicating that listeners favor the acoustic information within the target, which is reduced and sounds like the shorter present form, over almost any other source of information. Furthermore, when the target is presented auditorily with surrounding context, the bias shifts slightly toward the direction shown in the orthographic or auditory-no-target experiments. Thus, listeners prioritize acoustic information within the target when present, even if that information is misleading, but they also take discourse information into account. -
Bürki, A., Ernestus, M., Gendrot, C., Fougeron, C., & Frauenfelder, U. H. (2011). What affects the presence versus absence of schwa and its duration: A corpus analysis of French connected speech. Journal of the Acoustical Society of America, 130, 3980-3991. doi:10.1121/1.3658386.
Abstract
This study presents an analysis of over 4000 tokens of words produced as variants with and without schwa in a French corpus of radio-broadcasted speech. In order to determine which of the many variables mentioned in the literature influence variant choice, 17 predictors were tested in the same analysis. Only five of these variables appeared to condition variant choice. The question of the processing stage, or locus, of this alternation process is also addressed in a comparison of the variables that predict variant choice with the variables that predict the acoustic duration of schwa in variants with schwa. Only two variables predicting variant choice also predict schwa duration. The limited overlap between the predictors for variant choice and for schwa duration, combined with the nature of these variables, suggest that the variants without schwa do not result from a phonetic process of reduction; that is, they are not the endpoint of gradient schwa shortening. Rather, these variants are generated early in the production process, either during phonological encoding or word-form retrieval. These results, based on naturally produced speech, provide a useful complement to on-line production experiments using artificial speech tasks. -
Ernestus, M., & Baayen, R. H. (2011). Corpora and exemplars in phonology. In J. A. Goldsmith, J. Riggle, & A. C. Yu (Eds.), The handbook of phonological theory (2nd ed.) (pp. 374-400). Oxford: Wiley-Blackwell. -
Ernestus, M., & Warner, N. (2011). An introduction to reduced pronunciation variants [Editorial]. Journal of Phonetics, 39(SI), 253-260. doi:10.1016/S0095-4470(11)00055-6.
Abstract
Words are often pronounced very differently in formal speech than in everyday conversations. In conversational speech, they may contain weaker segments, fewer sounds, and even fewer syllables. The English word yesterday, for instance, may be pronounced as [jɛʃeɪ]. This article forms an introduction to the phenomenon of reduced pronunciation variants and to the eight research articles in this issue on the characteristics, production, and comprehension of these variants. We provide a description of the phenomenon, addressing its high frequency of occurrence in casual conversations in various languages, the gradient nature of many reduction processes, and the intelligibility of reduced variants to native listeners. We also describe the relevance of research on reduced variants for linguistic and psychological theories as well as for applications in speech technology and foreign language acquisition. Since reduced variants occur more often in spontaneous than in formal speech, they are hard to study in the laboratory under well controlled conditions. We discuss the advantages and disadvantages of possible solutions, including the research methods employed in the articles in this special issue, based on corpora and experiments. This article ends with a short overview of the articles in this issue. -
Ernestus, M. (2011). Gradience and categoricality in phonological theory. In M. Van Oostendorp, C. J. Ewen, E. Hume, & K. Rice (Eds.), The Blackwell companion to phonology (pp. 2115-2136). Wiley-Blackwell. -
Ernestus, M., & Warner, N. (Eds.). (2011). Speech reduction [Special Issue]. Journal of Phonetics, 39(SI). -
Hanique, I., & Ernestus, M. (2011). Final /t/ reduction in Dutch past-participles: The role of word predictability and morphological decomposability. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 2849-2852).
Abstract
This corpus study demonstrates that the realization of word-final /t/ in Dutch past-participles in various speech styles is affected by a word’s predictability and paradigmatic relative frequency. In particular, /t/s are shorter and more often absent if the two preceding words are more predictable. In addition, /t/s, especially in irregular verbs, are more reduced, the lower the verb’s lemma frequency relative to the past-participle’s frequency. Both effects are more pronounced in more spontaneous speech. These findings are expected if speech planning plays an important role in speech reduction. Index Terms: pronunciation variation, acoustic reduction, corpus research, word predictability, morphological decomposability -
Janse, E., & Ernestus, M. (2011). The roles of bottom-up and top-down information in the recognition of reduced speech: Evidence from listeners with normal and impaired hearing. Journal of Phonetics, 39(3), 330-343. doi:10.1016/j.wocn.2011.03.005.
-
Kuzla, C., & Ernestus, M. (2011). Prosodic conditioning of phonetic detail in German plosives. Journal of Phonetics, 39, 143-155. doi:10.1016/j.wocn.2011.01.001.
Abstract
This study investigates the prosodic conditioning of phonetic details which are candidate cues to phonological contrasts. German /b, d, g, p, t, k/ were examined in three prosodic positions. Lenis plosives /b, d, g/ were produced with less glottal vibration at larger prosodic boundaries, whereas their VOT showed no effect of prosody. VOT of fortis plosives /p, t, k/ decreased at larger boundaries, as did their burst intensity maximum. Vowels (when measured from consonantal release) following fortis plosives and lenis velars were shorter after larger boundaries. Closure duration, which did not contribute to the fortis/lenis contrast, was heavily affected by prosody. These results support neither of the hitherto proposed accounts of prosodic strengthening (Uniform Strengthening and Feature Enhancement). We propose a different account, stating that the phonological identity of speech sounds remains stable not only within, but also across prosodic positions (contrast-over-prosody hypothesis). Domain-initial strengthening hardly diminishes the contrast between prosodically weak fortis and strong lenis plosives. -
Schuppler, B., Ernestus, M., Scharenborg, O., & Boves, L. (2011). Acoustic reduction in conversational Dutch: A quantitative analysis based on automatically generated segmental transcriptions [Letter to the editor]. Journal of Phonetics, 39(1), 96-109. doi:10.1016/j.wocn.2010.11.006.
Abstract
In spontaneous, conversational speech, words are often reduced compared to their citation forms, such that a word like yesterday may sound like [ˈjɛʃeɪ]. The present chapter investigates such acoustic reduction. The study of reduction needs large corpora that are transcribed phonetically. The first part of this chapter describes an automatic transcription procedure used to obtain such a large phonetically transcribed corpus of Dutch spontaneous dialogues, which is subsequently used for the investigation of acoustic reduction. First, the orthographic transcriptions were adapted for automatic processing. Next, the phonetic transcription of the corpus was created by means of a forced alignment using a lexicon with multiple pronunciation variants per word. These variants were generated by applying phonological and reduction rules to the canonical phonetic transcriptions of the words. The second part of this chapter reports the results of a quantitative analysis of reduction in the corpus on the basis of the generated transcriptions and gives an inventory of segmental reductions in standard Dutch. Overall, we found that reduction is more pervasive in spontaneous Dutch than previously documented. -
Ten Bosch, L., Hämäläinen, A., & Ernestus, M. (2011). Assessing acoustic reduction: Exploiting local structure in speech. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011), Florence, Italy (pp. 2665-2668).
Abstract
This paper presents a method to quantify the spectral characteristics of reduction in speech. Hämäläinen et al. (2009) proposes a measure of spectral reduction which is able to predict a substantial amount of the variation in duration that linguistically motivated variables do not account for. In this paper, we continue studying acoustic reduction in speech by developing a new acoustic measure of reduction, based on local manifold structure in speech. We show that this measure yields significantly improved statistical models for predicting variation in duration. -
Torreira, F., & Ernestus, M. (2011). Realization of voiceless stops and vowels in conversational French and Spanish. Laboratory Phonology, 2(2), 331-353. doi:10.1515/LABPHON.2011.012.
Abstract
The present study compares the realization of intervocalic voiceless stops and vowels surrounded by voiceless stops in conversational Spanish and French. Our data reveal significant differences in how these segments are realized in each language. Spanish voiceless stops tend to have shorter stop closures, display incomplete closures more often, and exhibit more voicing than French voiceless stops. As for vowels, more cases of complete devoicing and greater degrees of partial devoicing were found in French than in Spanish. Moreover, all French vowel types exhibit significantly lower F1 values than their Spanish counterparts. These findings indicate that the extent of reduction that a segment type can undergo in conversational speech can vary significantly across languages. Language differences in coarticulatory strategies and “base-of-articulation” are discussed as possible causes of our observations. -
Torreira, F., & Ernestus, M. (2011). Vowel elision in casual French: The case of vowel /e/ in the word c’était. Journal of Phonetics, 39(1), 50-58. doi:10.1016/j.wocn.2010.11.003.
Abstract
This study investigates the reduction of vowel /e/ in the French word c’était /setε/ ‘it was’. This reduction phenomenon appeared to be highly frequent, as more than half of the occurrences of this word in a corpus of casual French contained few or no acoustic traces of a vowel between [s] and [t]. All our durational analyses clearly supported a categorical absence of vowel /e/ in a subset of c’était tokens. This interpretation was also supported by our finding that the occurrence of complete elision and [e] duration in non-elision tokens were conditioned by different factors. However, spectral measures were consistent with the possibility that a highly reduced /e/ vowel is still present in elision tokens in spite of the durational evidence for categorical elision. We discuss how these findings can be reconciled, and conclude that acoustic analysis of uncontrolled materials can provide valuable information about the mechanisms underlying reduction phenomena in casual speech. -
De Vaan, L., Ernestus, M., & Schreuder, R. (2011). The lifespan of lexical traces for novel morphologically complex words. The Mental Lexicon, 6, 374-392. doi:10.1075/ml.6.3.02dev.
Abstract
This study investigates the lifespans of lexical traces for novel morphologically complex words. In two visual lexical decision experiments, a neologism was either primed by itself or by its stem. The target occurred 40 trials after the prime (Experiments 1 & 2), after a 12 hour delay (Experiment 1), or after a one week delay (Experiment 2). Participants recognized neologisms more quickly if they had seen them before in the experiment. These results show that memory traces for novel morphologically complex words already come into existence after a very first exposure and that they last for at least a week. We did not find evidence for a role of sleep in the formation of memory traces. Interestingly, Base Frequency appeared to play a role in the processing of the neologisms also when they were presented a second time and had their own memory traces. -
Van de Ven, M., Tucker, B. V., & Ernestus, M. (2011). Semantic context effects in the comprehension of reduced pronunciation variants. Memory & Cognition, 39, 1301-1316. doi:10.3758/s13421-011-0103-2.
Abstract
Listeners require context to understand the highly reduced words that occur in casual speech. The present study reports four auditory lexical decision experiments in which the role of semantic context in the comprehension of reduced versus unreduced speech was investigated. Experiments 1 and 2 showed semantic priming for combinations of unreduced, but not reduced, primes and low-frequency targets. In Experiment 3, we crossed the reduction of the prime with the reduction of the target. Results showed no semantic priming from reduced primes, regardless of the reduction of the targets. Finally, Experiment 4 showed that reduced and unreduced primes facilitate upcoming low-frequency related words equally if the interstimulus interval is extended. These results suggest that semantically related words need more time to be recognized after reduced primes, but once reduced primes have been fully (semantically) processed, these primes can facilitate the recognition of upcoming words as well as do unreduced primes. -
Bürki, A., Ernestus, M., & Frauenfelder, U. H. (2010). Is there only one "fenêtre" in the production lexicon? On-line evidence on the nature of phonological representations of pronunciation variants for French schwa words. Journal of Memory and Language, 62, 421-437. doi:10.1016/j.jml.2010.01.002.
Abstract
This study examines whether the production of words with two phonological variants involves single or multiple lexical phonological representations. Three production experiments investigated the roles of the relative frequencies of the two pronunciation variants of French words with schwa: the schwa variant and the reduced variant. In two naming tasks and in a symbol–word association learning task, variants with higher relative frequencies were produced faster. This suggests that the production lexicon keeps a frequency count for each variant and hence that schwa words are represented in the production lexicon with two different lexemes. In addition, the advantage for schwa variants over reduced variants in the naming tasks but not in the learning task and the absence of a variant relative frequency effect for schwa variants produced in isolation support the hypothesis that context affects the variants’ lexical activation and modulates the effect of variant relative frequency. -
Hanique, I., Schuppler, B., & Ernestus, M. (2010). Morphological and predictability effects on schwa reduction: The case of Dutch word-initial syllables. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 933-936).
Abstract
This corpus-based study shows that the presence and duration of schwa in Dutch word-initial syllables are affected by a word’s predictability and its morphological structure. Schwa is less reduced in words that are more predictable given the following word. In addition, schwa may be longer if the syllable forms a prefix, and in prefixes the duration of schwa is positively correlated with the frequency of the word relative to its stem. Our results suggest that the conditions which favor reduced realizations are more complex than one would expect on the basis of the current literature. -
Kuzla, C., Ernestus, M., & Mitterer, H. (2010). Compensation for assimilatory devoicing and prosodic structure in German fricative perception. In C. Fougeron, B. Kühnert, M. D'Imperio, & N. Vallée (Eds.), Laboratory Phonology 10 (pp. 731-757). Berlin: De Gruyter. -
Pluymaekers, M., Ernestus, M., Baayen, R. H., & Booij, G. (2010). Morphological effects on fine phonetic detail: The case of Dutch -igheid. In C. Fougeron, B. Kühnert, M. D'Imperio, & N. Vallée (Eds.), Laboratory Phonology 10 (pp. 511-532). Berlin: De Gruyter. -
Scharenborg, O., Wan, V., & Ernestus, M. (2010). Unsupervised speech segmentation: An analysis of the hypothesized phone boundaries. Journal of the Acoustical Society of America, 127, 1084-1095. doi:10.1121/1.3277194.
Abstract
Despite using different algorithms, most unsupervised automatic phone segmentation methods achieve similar performance in terms of percentage correct boundary detection. Nevertheless, unsupervised segmentation algorithms are not able to perfectly reproduce manually obtained reference transcriptions. This paper investigates fundamental problems for unsupervised segmentation algorithms by comparing a phone segmentation obtained using only the acoustic information present in the signal with a reference segmentation created by human transcribers. The analyses of the output of an unsupervised speech segmentation method that uses acoustic change to hypothesize boundaries showed that acoustic change is a fairly good indicator of segment boundaries: over two-thirds of the hypothesized boundaries coincide with segment boundaries. Statistical analyses showed that the errors are related to segment duration, sequences of similar segments, and inherently dynamic phones. In order to improve unsupervised automatic speech segmentation, current one-stage bottom-up segmentation methods should be expanded into two-stage segmentation methods that are able to use a mix of bottom-up information extracted from the speech signal and automatically derived top-down information. In this way, unsupervised methods can be improved while remaining flexible and language-independent. -
Schuppler, B., Ernestus, M., Van Dommelen, W., & Koreman, J. (2010). Predicting human perception and ASR classification of word-final [t] by its acoustic sub-segmental properties. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 2466-2469).
Abstract
This paper presents a study on the acoustic sub-segmental properties of word-final /t/ in conversational standard Dutch and how these properties contribute to whether humans and an ASR system classify the /t/ as acoustically present or absent. In general, humans and the ASR system use the same cues (presence of a constriction, a burst, and alveolar frication), but the ASR system is also less sensitive to fine cues (weak bursts, smoothly starting friction) than human listeners and misled by the presence of glottal vibration. These data inform the further development of models of human and automatic speech processing. -
Sikveland, A., Öttl, A., Amdal, I., Ernestus, M., Svendsen, T., & Edlund, J. (2010). Spontal-N: A corpus of interactional spoken Norwegian. In N. Calzolari, K. Choukri, B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, & D. Tapias (Eds.), Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC'10) (pp. 2986-2991). Paris: European Language Resources Association (ELRA).
Abstract
Spontal-N is a corpus of spontaneous, interactional Norwegian. To our knowledge, it is the first corpus of Norwegian in which the majority of speakers have spent significant parts of their lives in Sweden, and in which the recorded speech displays varying degrees of interference from Swedish. The corpus consists of studio-quality audio and video recordings of four 30-minute free conversations between acquaintances, and a manual orthographic transcription of the entire material. On the basis of the orthographic transcriptions, we automatically annotated approximately 50 percent of the material on the phoneme level, by means of a forced alignment between the acoustic signal and pronunciations listed in a dictionary. Approximately seven percent of the automatic transcription was manually corrected. Taking the manual correction as a gold standard, we evaluated several sources of pronunciation variants for the automatic transcription. Spontal-N is intended as a general-purpose speech resource that is also suitable for investigating phonetic detail. -
Spilková, H., Brenner, D., Öttl, A., Vondřička, P., Van Dommelen, W., & Ernestus, M. (2010). The Kachna L1/L2 picture replication corpus. In N. Calzolari, K. Choukri, B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, & D. Tapias (Eds.), Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC'10) (pp. 2432-2436). Paris: European Language Resources Association (ELRA).
Abstract
This paper presents the Kachna corpus of spontaneous speech, in which ten Czech and ten Norwegian speakers were recorded both in their native language and in English. The dialogues were elicited using a picture replication task that requires active cooperation and interaction of speakers by asking them to produce a drawing as close to the original as possible. The corpus is appropriate for the study of interactional features and speech reduction phenomena across native and second languages. The combination of productions in non-native English and in speakers’ native language is advantageous for the investigation of L2 issues while providing an L1 behaviour reference from all the speakers. The corpus consists of 20 dialogues comprising 12 hours 53 minutes of recording, and was collected in 2008. Preparation of the transcriptions, including a manual orthographic transcription and an automatically generated phonetic transcription, is currently in progress. The phonetic transcriptions are automatically generated by aligning acoustic models with the speech signal on the basis of the orthographic transcriptions and a dictionary of pronunciation variants compiled for the relevant language. Upon completion the corpus will be made available via the European Language Resources Association (ELRA). -
Torreira, F., & Ernestus, M. (2010). Phrase-medial vowel devoicing in spontaneous French. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 2006-2009).
Abstract
This study investigates phrase-medial vowel devoicing in European French (e.g. /ty po/ [typo] 'you can'). Our spontaneous speech data confirm that French phrase-medial devoicing is a frequent phenomenon affecting high vowels preceded by voiceless consonants. We also found that devoicing is more frequent in temporally reduced and coarticulated vowels. Complete and partial devoicing were conditioned by the same variables (speech rate, consonant type and distance from the end of the AP). Given these results, we propose that phrase-medial vowel devoicing in French arises mainly from the temporal compression of vocalic gestures and the aerodynamic conditions imposed by high vowels. -
Torreira, F., Adda-Decker, M., & Ernestus, M. (2010). The Nijmegen corpus of casual French. Speech Communication, 52, 201-212. doi:10.1016/j.specom.2009.10.004.
Abstract
This article describes the preparation, recording and orthographic transcription of a new speech corpus, the Nijmegen Corpus of Casual French (NCCFr). The corpus contains a total of over 36 h of recordings of 46 French speakers engaged in conversations with friends. Casual speech was elicited during three different parts, which together provided around 90 min of speech from every pair of speakers. While Parts 1 and 2 did not require participants to perform any specific task, in Part 3 participants negotiated a common answer to general questions about society. Comparisons with the ESTER corpus of journalistic speech show that the two corpora contain speech of considerably different registers. A number of indicators of casualness, including swear words, casual words, verlan, disfluencies and word repetitions, are more frequent in the NCCFr than in the ESTER corpus, while the use of double negation, an indicator of formal speech, is less frequent. In general, these estimates of casualness are constant through the three parts of the recording sessions and across speakers. Based on these facts, we conclude that our corpus is a rich resource of highly casual speech, and that it can be effectively exploited by researchers in language science and technology. -
Torreira, F., & Ernestus, M. (2010). The Nijmegen corpus of casual Spanish. In N. Calzolari, K. Choukri, B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, & D. Tapias (Eds.), Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC'10) (pp. 2981-2985). Paris: European Language Resources Association (ELRA).
Abstract
This article describes the preparation, recording and orthographic transcription of a new speech corpus, the Nijmegen Corpus of Casual Spanish (NCCSp). The corpus contains around 30 hours of recordings of 52 Madrid Spanish speakers engaged in conversations with friends. Casual speech was elicited during three different parts, which together provided around ninety minutes of speech from every group of speakers. While Parts 1 and 2 did not require participants to perform any specific task, in Part 3 participants negotiated a common answer to general questions about society. Information about how to obtain a copy of the corpus can be found online at http://mirjamernestus.ruhosting.nl/Ernestus/NCCSp -
Van de Ven, M., Tucker, B. V., & Ernestus, M. (2010). Semantic facilitation in bilingual everyday speech comprehension. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 1245-1248).
Abstract
Previous research suggests that bilinguals presented with low- and high-predictability sentences benefit from semantics in clear but not in conversational speech [1]. In everyday speech, however, many words are not highly predictable. Previous research has shown that native listeners can also use more subtle semantic contextual information [2]. The present study reports two auditory lexical decision experiments investigating to what extent late Asian-English bilinguals benefit from subtle semantic cues in their processing of English unreduced and reduced speech. Our results indicate that these bilinguals are less sensitive to semantic cues than native listeners for both speech registers.