Bruggeman, L., & Cutler, A. (2019). The dynamics of lexical activation and competition in bilinguals' first versus second language. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1342-1346). Canberra, Australia: Australasian Speech Science and Technology Association Inc.
Abstract: Speech input causes listeners to activate multiple candidate words which then compete with one another. These include onset competitors, which share a beginning (bumper, butter), but also, counterintuitively, rhyme competitors, which share an ending (bumper, jumper). In L1, competition is typically stronger for onset than for rhyme. In L2, onset competition has been attested but rhyme competition has heretofore remained largely unexamined. We assessed L1 (Dutch) and L2 (English) word recognition by the same late-bilingual individuals. In each language, eye gaze was recorded as listeners heard sentences and viewed sets of drawings: three unrelated, one depicting an onset or rhyme competitor of a word in the input. Activation patterns revealed substantial onset competition but no significant rhyme competition in either L1 or L2. Rhyme competition may thus be a "luxury" feature of maximally efficient listening, to be abandoned when resources are scarcer, as in listening by late bilinguals, in either language.

Cutler, A., Burchfield, A., & Antoniou, M. (2019). A criterial interlocutor tally for successful talker adaptation? In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1485-1489). Canberra, Australia: Australasian Speech Science and Technology Association Inc.
Abstract: Part of the remarkable efficiency of listening is accommodation to unfamiliar talkers' specific pronunciations by retuning of phonemic inter-category boundaries. Such retuning occurs in the second language (L2) as well as the first (L1); however, recent research with émigrés revealed successful adaptation in the environmental L2 but, unprecedentedly, not in L1, despite continuing L1 use. A possible explanation involving relative exposure to novel talkers is here tested in heritage language users with Mandarin as family L1 and English as environmental language. In English, exposure to an ambiguous sound in disambiguating word contexts prompted the expected adjustment of phonemic boundaries in subsequent categorisation. However, no adjustment occurred in Mandarin, again despite regular use. Participants reported highly asymmetric interlocutor counts in the two languages. We conclude that successful retuning ability requires regular exposure to novel talkers in the language in question, a criterion not met for the émigrés' or for these heritage users' L1.

Joo, H., Jang, J., Kim, S., Cho, T., & Cutler, A. (2019). Prosodic structural effects on coarticulatory vowel nasalization in Australian English in comparison to American English. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 835-839). Canberra, Australia: Australasian Speech Science and Technology Association Inc.
Abstract: This study investigates effects of prosodic factors (prominence, boundary) on coarticulatory V-nasalization in Australian English (AusE) in CVN and NVC, in comparison to those in American English (AmE). As in AmE, prominence was found to lengthen N but to reduce V-nasalization, enhancing N's nasality and V's orality, respectively (paradigmatic contrast enhancement); the prominence effect in CVN, however, was more robust than that in AmE. Again similar to findings in AmE, boundary induced a reduction of N-duration and V-nasalization phrase-initially (syntagmatic contrast enhancement), and increased the nasality of both C and V phrase-finally. But AusE showed some differences in terms of the magnitude of V-nasalization and N-duration. The results suggest that linguistic contrast enhancements underlie prosodic-structure modulation of coarticulatory V-nasalization in comparable ways across dialects, while the fine phonetic detail indicates that the phonetics-prosody interplay is internalized in the individual dialect's phonetic grammar.

Ip, M. H. K., & Cutler, A. (2018). Asymmetric efficiency of juncture perception in L1 and L2. In K. Klessa, J. Bachan, A. Wagner, M. Karpiński, & D. Śledziński (Eds.), Proceedings of Speech Prosody 2018 (pp. 289-296). Baixas, France: ISCA. doi:10.21437/SpeechProsody.2018-59.
Abstract: In two experiments, Mandarin listeners resolved potential syntactic ambiguities in spoken utterances in (a) their native language (L1) and (b) English, which they had learned as a second language (L2). A new disambiguation task was used, requiring speeded responses to select the correct meaning for structurally ambiguous sentences. Importantly, the ambiguities used in the study are identical in Mandarin and in English, and production data show that prosodic disambiguation of this type of ambiguity is also realised very similarly in the two languages. The perceptual results here showed however that listeners' response patterns differed for L1 and L2, although there was a significant increase in similarity between the two response patterns with increasing exposure to the L2. Thus identical ambiguity and comparable disambiguation patterns in L1 and L2 do not lead to immediate application of the appropriate L1 listening strategy to L2; instead, it appears that such a strategy may have to be learned anew for the L2.

Ip, M. H. K., & Cutler, A. (2018). Cue equivalence in prosodic entrainment for focus detection. In J. Epps, J. Wolfe, J. Smith, & C. Jones (Eds.), Proceedings of the 17th Australasian International Conference on Speech Science and Technology (pp. 153-156).
Abstract: Using a phoneme detection task, the present series of experiments examines whether listeners can entrain to different combinations of prosodic cues to predict where focus will fall in an utterance. The stimuli were recorded by four female native speakers of Australian English who happened to have used different prosodic cues to produce sentences with prosodic focus: a combination of cues (duration, mean and maximum F0, F0 range, and a longer pre-target interval before the focused word onset); only mean F0 cues; only the pre-target interval; and only duration cues. Results revealed that listeners can entrain in almost every condition, except where duration was the only reliable cue. Our findings suggest that listeners are flexible in the cues they use for focus processing.

Cutler, A., Burchfield, L. A., & Antoniou, M. (2018). Factors affecting talker adaptation in a second language. In J. Epps, J. Wolfe, J. Smith, & C. Jones (Eds.), Proceedings of the 17th Australasian International Conference on Speech Science and Technology (pp. 33-36).
Abstract: Listeners adapt rapidly to previously unheard talkers by adjusting phoneme categories using lexical knowledge, in a process termed lexically-guided perceptual learning. Although this is firmly established for listening in the native language (L1), perceptual flexibility in second languages (L2) is as yet less well understood. We report two experiments examining L1 and L2 perceptual learning, the first in Mandarin-English late bilinguals, the second in Australian learners of Mandarin. Both studies showed stronger learning in L1; in L2, however, learning appeared for the English-L1 group but not for the Mandarin-L1 group. Phonological mapping differences from the L1 to the L2 are suggested as the reason for this result.

Choi, J., Broersma, M., & Cutler, A. (2015). Enhanced processing of a lost language: Linguistic knowledge or linguistic skill? In Proceedings of Interspeech 2015: 16th Annual Conference of the International Speech Communication Association (pp. 3110-3114).
Abstract: Same-different discrimination judgments for pairs of Korean stop consonants, or of Japanese syllables differing in phonetic segment length, were made by adult Korean adoptees in the Netherlands, by matched Dutch controls, and by Korean controls. The adoptees did not outdo either control group on either task, although the same individuals had performed significantly better than matched controls on an identification learning task. This suggests that early exposure to multiple phonetic systems does not specifically improve acoustic-phonetic skills; rather, enhanced performance suggests retained language knowledge.

Cutler, A., & Chen, H.-C. (1995). Phonological similarity effects in Cantonese word recognition. In K. Elenius & P. Branderud (Eds.), Proceedings of the Thirteenth International Congress of Phonetic Sciences: Vol. 1 (pp. 106-109). Stockholm: Stockholm University.
Abstract: Two lexical decision experiments in Cantonese are described in which the recognition of spoken target words as a function of phonological similarity to a preceding prime is investigated. Phonological similarity in first syllables produced inhibition, while similarity in second syllables led to facilitation. Differences between syllables in tonal and segmental structure had generally similar effects.

Cutler, A. (1995). Universal and language-specific in the development of speech. Biology International (Special Issue 33).
Additional information: http://www.iubs.org/?id=34

Otake, T., Davis, S. M., & Cutler, A. (1995). Listeners' representations of within-word structure: A cross-linguistic and cross-dialectal investigation. In J. Pardo (Ed.), Proceedings of EUROSPEECH 95: Vol. 3 (pp. 1703-1706). Madrid: European Speech Communication Association.
Abstract: Japanese, British English and American English listeners were presented with spoken words in their native language, and asked to mark on a written transcript of each word the first natural division point in the word. The results showed clear and strong patterns of consensus, indicating that listeners have available to them conscious representations of within-word structure. Orthography did not play a strongly deciding role in the results. The patterns of response were at variance with results from on-line studies of speech segmentation, suggesting that the present task taps not those representations used in on-line listening, but levels of representation which may involve much richer knowledge of word-internal structure.

Allerhand, M., Butterfield, S., Cutler, A., & Patterson, R. (1992). Assessing syllable strength via an auditory model. In Proceedings of the Institute of Acoustics: Vol. 14 Part 6 (pp. 297-304). St. Albans, Herts: Institute of Acoustics.

Cutler, A., Kearns, R., Norris, D., & Scott, D. (1992). Listeners' responses to extraneous signals coincident with English and French speech. In J. Pittam (Ed.), Proceedings of the 4th Australian International Conference on Speech Science and Technology (pp. 666-671). Canberra: Australian Speech Science and Technology Association.
Abstract: English and French listeners performed two tasks (click location and speeded click detection) with both English and French sentences, closely matched for syntactic and phonological structure. Clicks were located more accurately in open- than in closed-class words in both English and French; they were detected more rapidly in open- than in closed-class words in English, but not in French. The two listener groups produced the same pattern of responses, suggesting that higher-level linguistic processing was not involved in these tasks.

Cutler, A., & Robinson, T. (1992). Response time as a metric for comparison of speech recognition by humans and machines. In J. Ohala, T. Neary, & B. Derwing (Eds.), Proceedings of the Second International Conference on Spoken Language Processing: Vol. 1 (pp. 189-192). Alberta: University of Alberta.
Abstract: The performance of automatic speech recognition systems is usually assessed in terms of error rate. Human speech recognition produces few errors, but relative difficulty of processing can be assessed via response time techniques. We report the construction of a measure analogous to response time in a machine recognition system. This measure may be compared directly with human response times. We conducted a trial comparison of this type at the phoneme level, including both tense and lax vowels and a variety of consonant classes. The results suggested similarities between human and machine processing in the case of consonants, but differences in the case of vowels.

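The abstract does not spell out how the machine measure was constructed, so the following is only a conceivable illustration of a response-time analogue for a frame-based recogniser: the time from target onset until the target phoneme's posterior probability first exceeds, and stays above, a threshold. The function name, the threshold, and the toy posteriors are assumptions for demonstration, not the authors' metric.

```python
import numpy as np

def machine_rt(frame_posteriors, target_id, threshold=0.5, frame_ms=10.0):
    """Hypothetical machine 'response time': milliseconds from target onset until
    the target phoneme's frame-wise posterior exceeds the threshold and remains
    above it for the rest of the target region."""
    probs = np.asarray(frame_posteriors)[:, target_id]
    for t in range(len(probs)):
        if np.all(probs[t:] >= threshold):
            return t * frame_ms
    return None  # the target hypothesis never reliably dominated

# Toy posteriors over three phoneme classes across six 10-ms frames; the target is class 2.
posteriors = [
    [0.6, 0.3, 0.1],
    [0.4, 0.3, 0.3],
    [0.2, 0.2, 0.6],
    [0.1, 0.2, 0.7],
    [0.1, 0.1, 0.8],
    [0.0, 0.1, 0.9],
]
print(machine_rt(posteriors, target_id=2))  # 20.0
```

Whatever its exact form, a per-item measure of this kind can be set against human phoneme-detection response times on the same materials, which is the comparison reported above.
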
McQueen, J. M., & Cutler, A. (1992). Words within words: Lexical statistics and lexical access. In J. Ohala, T. Neary, & B. Derwing (Eds.), Proceedings of the Second International Conference on Spoken Language Processing: Vol. 1 (pp. 221-224). Alberta: University of Alberta.
Abstract: This paper presents lexical statistics on the pattern of occurrence of words embedded in other words. We report the results of an analysis of 25000 words, varying in length from two to six syllables, extracted from a phonetically-coded English dictionary (The Longman Dictionary of Contemporary English). Each syllable, and each string of syllables within each word, was checked against the dictionary. Two analyses are presented: the first used a complete list of polysyllables, with look-up on the entire dictionary; the second used a sublist of content words, counting only embedded words which were themselves content words. The results have important implications for models of human speech recognition. The efficiency of these models depends, in different ways, on the number and location of words within words.

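As a concrete illustration of the kind of count described above, the sketch below checks every contiguous syllable string inside a syllabified word against a lexicon and tallies the embedded words. The toy lexicon and syllabifications are placeholders standing in for the phonetically-coded Longman dictionary used in the paper, not the authors' actual analysis.

```python
# Toy syllabified lexicon standing in for the phonetically-coded dictionary.
lexicon = {
    ("car",): "car",
    ("pet",): "pet",
    ("car", "pet"): "carpet",
    ("bone",): "bone",
    ("trom", "bone"): "trombone",
}

def embedded_words(syllables):
    """Return (start_syllable, word) for every embedded word found inside a
    syllabified carrier word, excluding the carrier word itself."""
    hits = []
    n = len(syllables)
    for start in range(n):
        for end in range(start + 1, n + 1):
            if (start, end) == (0, n):
                continue  # skip the whole carrier word
            key = tuple(syllables[start:end])
            if key in lexicon:
                hits.append((start, lexicon[key]))
    return hits

print(embedded_words(["car", "pet"]))    # [(0, 'car'), (1, 'pet')]
print(embedded_words(["trom", "bone"]))  # [(1, 'bone')]
```

Counts of this kind, broken down by the number and position of embedded words, are the statistics against which the models discussed in the paper are evaluated.
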
Norris, D., Van Ooijen, B., & Cutler, A. (1992). Speeded detection of vowels and steady-state consonants. In J. Ohala, T. Neary, & B. Derwing (Eds.), Proceedings of the Second International Conference on Spoken Language Processing: Vol. 2 (pp. 1055-1058). Alberta: University of Alberta.
Abstract: We report two experiments in which vowels and steady-state consonants served as targets in a speeded detection task. In the first experiment, two vowels were compared with one voiced and one unvoiced fricative. Response times (RTs) to the vowels were longer than to the fricatives. The error rate was higher for the consonants. Consonants in word-final position produced the shortest RTs. For the vowels, RT correlated negatively with target duration. In the second experiment, the same two vowel targets were compared with two nasals. This time there was no significant difference in RTs, but the error rate was still significantly higher for the consonants. Error rate and length correlated negatively for the vowels only. We conclude that RT differences between phonemes are independent of vocalic or consonantal status. Instead, we argue that the process of phoneme detection reflects more finely grained differences in acoustic/articulatory structure within the phonemic repertoire.

Cutler, A., & Butterfield, S. (1989). Natural speech cues to word segmentation under difficult listening conditions. In J. Tubach & J. Mariani (Eds.), Proceedings of Eurospeech 89: European Conference on Speech Communication and Technology: Vol. 2 (pp. 372-375). Edinburgh: CEP Consultants.
Abstract: One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In three experiments, we examined how word boundaries are produced in deliberately clear speech. We found that speakers do indeed attempt to mark word boundaries; moreover, they differentiate between word boundaries in a way which suggests they are sensitive to listener needs. Application of heuristic segmentation strategies makes word boundaries before strong syllables easiest for listeners to perceive; but under difficult listening conditions speakers pay more attention to marking word boundaries before weak syllables, i.e. they mark those boundaries which are otherwise particularly hard to perceive.

Cutler, A. (1987). Components of prosodic effects in speech recognition. In Proceedings of the Eleventh International Congress of Phonetic Sciences: Vol. 1 (pp. 84-87). Tallinn: Academy of Sciences of the Estonian SSR, Institute of Language and Literature.
Abstract: Previous research has shown that listeners use the prosodic structure of utterances in a predictive fashion in sentence comprehension, to direct attention to accented words. Acoustically identical words spliced into sentence contexts are responded to differently if the prosodic structure of the context is varied: when the preceding prosody indicates that the word will be accented, responses are faster than when the preceding prosody is inconsistent with accent occurring on that word. In the present series of experiments, speech hybridisation techniques were first used to interchange the timing patterns within pairs of prosodic variants of utterances, independently of the pitch and intensity contours. The time-adjusted utterances could then serve as a basis for the orthogonal manipulation of the three prosodic dimensions of pitch, intensity and rhythm. The overall pattern of results showed that when listeners use prosody to predict accent location, they do not simply rely on a single prosodic dimension, but exploit the interaction between pitch, intensity and rhythm.

Cutler, A., & Carter, D. (1987). The prosodic structure of initial syllables in English. In J. Laver & M. Jack (Eds.), Proceedings of the European Conference on Speech Technology: Vol. 1 (pp. 207-210). Edinburgh: IEE.

Cutler, A. (1974). On saying what you mean without meaning what you say. In M. Galy, R. Fox, & A. Bruck (Eds.), Papers from the Tenth Regional Meeting, Chicago Linguistic Society (pp. 117-127). Chicago, Ill.: CLS.