Cutler, A., & Jesse, A. (2021). Word stress in speech perception. In J. S. Pardo, L. C. Nygaard, & D. B. Pisoni (Eds.), The handbook of speech perception (2nd ed., pp. 239-265). Chichester: Wiley.
Cutler, A., Aslin, R. N., Gervain, J., & Nespor, M. (Eds.). (2021). Special issue in honor of Jacques Mehler, Cognition's founding editor [Special Issue]. Cognition, 213.
Cutler, A., Aslin, R. N., Gervain, J., & Nespor, M. (2021). Special issue in honor of Jacques Mehler, Cognition's founding editor [Preface]. Cognition, 213: 104786. doi:10.1016/j.cognition.2021.104786.
Kember, H., Choi, J., Yu, J., & Cutler, A. (2021). The processing of linguistic prominence. Language and Speech, 64(2), 413-436. doi:10.1177/0023830919880217.
Abstract
Prominence, the expression of informational weight within utterances, can be signaled by prosodic highlighting (head-prominence, as in English) or by position (as in Korean edge-prominence). Prominence confers processing advantages, even if conveyed only by discourse manipulations. Here we compared processing of prominence in English and Korean, using a task that indexes processing success, namely recognition memory. In each language, participants’ memory was tested for target words heard in sentences in which they were prominent due to prosody, position, both, or neither. Prominence produced a recall advantage, but the relative effects differed across languages: for Korean listeners the positional advantage was greater, whereas for English listeners prosodic and syntactic prominence had equivalent and additive effects. In a further experiment, semantic and phonological foils tested depth of processing of the recall targets. Both foil types were correctly rejected, suggesting that semantic processing had not reached the level at which word form was no longer available. Together the results suggest that prominence processing is primarily driven by universal effects of information structure, but language-specific differences in frequency of experience prompt different relative advantages of prominence signal types. Processing efficiency increases in each case, however, creating more accurate and more rapidly contactable memory representations.
Norris, D., & Cutler, A. (2021). More why, less how: What we need from models of cognition. Cognition, 213: 104688. doi:10.1016/j.cognition.2021.104688.
Abstract
Science regularly experiences periods in which simply describing the world is prioritised over attempting to explain it. Cognition, this journal, came into being some 45 years ago as an attempt to lay one such period to rest; without doubt, it has helped create the current cognitive science climate in which theory is decidedly welcome. Here we summarise the reasons why a theoretical approach is imperative in our field, and call attention to some potentially counter-productive trends in which cognitive models are concerned too exclusively with how processes work at the expense of why the processes exist in the first place and thus what the goal of modelling them must be.
Zhou, W., Broersma, M., & Cutler, A. (2021). Asymmetric memory for birth language perception versus production in young international adoptees. Cognition, 213: 104788. doi:10.1016/j.cognition.2021.104788.
Abstract
Adults who as children were adopted into a different linguistic community retain knowledge of their birth language. The possession (without awareness) of such knowledge is known to facilitate the (re)learning of birth-language speech patterns; this perceptual learning predicts such adults' production success as well, indicating that the retained linguistic knowledge is abstract in nature. Adoptees' acquisition of their adopted language is fast and complete; birth-language mastery disappears rapidly, although this latter process has been little studied. Here, 46 international adoptees from China aged four to 10 years, with Dutch as their new language, plus 47 matched non-adopted Dutch-native controls and 40 matched non-adopted Chinese controls, undertook across a two-week period 10 blocks of training in perceptually identifying Chinese speech contrasts (one segmental, one tonal) which were unlike any Dutch contrasts. Chinese controls easily accomplished all these tasks. The same participants also provided speech production data in an imitation task. In perception, adoptees and Dutch controls scored equivalently poorly at the outset of training; with training, the adoptees significantly improved while the Dutch controls did not. In production, adoptees' imitations both before and after training could be better identified, and received higher goodness ratings, than those of Dutch controls. The perception results confirm that birth-language knowledge is stored and can facilitate re-learning in post-adoption childhood; the production results suggest that although processing of phonological category detail appears to depend on access to the stored knowledge, general articulatory dimensions can at this age also still be remembered, and may facilitate spoken imitation.
Choi, J., Broersma, M., & Cutler, A. (2018). Phonetic learning is not enhanced by sequential exposure to more than one language. Linguistic Research, 35(3), 567-581. doi:10.17250/khisli.35.3.201812.006.
Abstract
Several studies have documented that international adoptees, who in early years have experienced a change from a language used in their birth country to a new language in an adoptive country, benefit from the limited early exposure to the birth language when relearning that language’s sounds later in life. The adoptees’ relearning advantages have been argued to be conferred by lasting birth-language knowledge obtained from the early exposure. However, it is also plausible to assume that the advantages may arise from adoptees’ superior ability to learn language sounds in general, as a result of their unusual linguistic experience, i.e., exposure to multiple languages in sequence early in life. If this is the case, then the adoptees’ relearning benefits should generalize to previously unheard language sounds, rather than be limited to their birth-language sounds. In the present study, adult Korean adoptees in the Netherlands and matched Dutch-native controls were trained on identifying a Japanese length distinction to which they had never been exposed before. The adoptees and Dutch controls did not differ on any test carried out before, during, or after the training, indicating that observed adoptee advantages for birth-language relearning do not generalize to novel, previously unheard language sounds. The finding thus fails to support the suggestion that birth-language relearning advantages may arise from enhanced ability to learn language sounds in general conferred by early experience in multiple languages. Rather, our finding supports the original contention that such advantages involve memory traces obtained before adoption.
Ip, M. H. K., & Cutler, A. (2018). Asymmetric efficiency of juncture perception in L1 and L2. In K. Klessa, J. Bachan, A. Wagner, M. Karpiński, & D. Śledziński (Eds.), Proceedings of Speech Prosody 2018 (pp. 289-296). Baixas, France: ISCA. doi:10.21437/SpeechProsody.2018-59.
Abstract
In two experiments, Mandarin listeners resolved potential syntactic ambiguities in spoken utterances in (a) their native language (L1) and (b) English, which they had learned as a second language (L2). A new disambiguation task was used, requiring speeded responses to select the correct meaning for structurally ambiguous sentences. Importantly, the ambiguities used in the study are identical in Mandarin and in English, and production data show that prosodic disambiguation of this type of ambiguity is also realised very similarly in the two languages. The perceptual results here showed, however, that listeners’ response patterns differed for L1 and L2, although there was a significant increase in similarity between the two response patterns with increasing exposure to the L2. Thus identical ambiguity and comparable disambiguation patterns in L1 and L2 do not lead to immediate application of the appropriate L1 listening strategy to L2; instead, it appears that such a strategy may have to be learned anew for the L2.
Ip, M. H. K., & Cutler, A. (2018). Cue equivalence in prosodic entrainment for focus detection. In J. Epps, J. Wolfe, J. Smith, & C. Jones (Eds.), Proceedings of the 17th Australasian International Conference on Speech Science and Technology (pp. 153-156).
Abstract
Using a phoneme detection task, the present series of experiments examines whether listeners can entrain to different combinations of prosodic cues to predict where focus will fall in an utterance. The stimuli were recorded by four female native speakers of Australian English who happened to have used different prosodic cues to produce sentences with prosodic focus: a combination of duration cues, mean and maximum F0, F0 range, and a longer pre-target interval before the focused word onset; only mean F0 cues; only the pre-target interval; and only duration cues. Results revealed that listeners can entrain in almost every condition except where duration was the only reliable cue. Our findings suggest that listeners are flexible in the cues they use for focus processing.
Cutler, A., Burchfield, L. A., & Antoniou, M. (2018). Factors affecting talker adaptation in a second language. In J. Epps, J. Wolfe, J. Smith, & C. Jones (Eds.), Proceedings of the 17th Australasian International Conference on Speech Science and Technology (pp. 33-36).
Abstract
Listeners adapt rapidly to previously unheard talkers by adjusting phoneme categories using lexical knowledge, in a process termed lexically-guided perceptual learning. Although this is firmly established for listening in the native language (L1), perceptual flexibility in second languages (L2) is as yet less well understood. We report two experiments examining L1 and L2 perceptual learning, the first in Mandarin-English late bilinguals, the second in Australian learners of Mandarin. Both studies showed stronger learning in L1; in L2, however, learning appeared for the English-L1 group but not for the Mandarin-L1 group. Phonological mapping differences from the L1 to the L2 are suggested as the reason for this result.
Cutler, A., & Farrell, J. (2018). Listening in first and second language. In J. I. Liontas (Ed.), The TESOL encyclopedia of language teaching. New York: Wiley. doi:10.1002/9781118784235.eelt0583.
Abstract
Listeners' recognition of spoken language involves complex decoding processes: The continuous speech stream must be segmented into its component words, and words must be recognized despite great variability in their pronunciation (due to talker differences, or to influence of phonetic context, or to speech register) and despite competition from many spuriously present forms supported by the speech signal. L1 listeners deal more readily with all levels of this complexity than L2 listeners. Fortunately, the decoding processes necessary for competent L2 listening can be taught in the classroom. Evidence-based methodologies targeted at the development of efficient speech decoding include teaching of minimal pairs, of phonotactic constraints, and of reduction processes, as well as the use of dictation and L2 video captions.
Johnson, E. K., Bruggeman, L., & Cutler, A. (2018). Abstraction and the (misnamed) language familiarity effect. Cognitive Science, 42, 633-645. doi:10.1111/cogs.12520.
Abstract
Talkers are recognized more accurately if they are speaking the listeners’ native language rather than an unfamiliar language. This “language familiarity effect” has been shown not to depend upon comprehension and must instead involve language sound patterns. We further examine the level of sound-pattern processing involved, by comparing talker recognition in foreign languages versus two varieties of English, by (a) English speakers of one variety, (b) English speakers of the other variety, and (c) non-native listeners (more familiar with one of the varieties). All listener groups performed better with native than foreign speech, but no effect of language variety appeared: Native listeners discriminated talkers equally well in each, with the native variety never outdoing the other variety, and non-native listeners discriminated talkers equally poorly in each, irrespective of the variety's familiarity. The results suggest that this talker recognition effect rests not on simple familiarity, but on an abstract level of phonological processing.
Kidd, E., Junge, C., Spokes, T., Morrison, L., & Cutler, A. (2018). Individual differences in infant speech segmentation: Achieving the lexical shift. Infancy, 23(6), 770-794. doi:10.1111/infa.12256.
Abstract
We report a large‐scale electrophysiological study of infant speech segmentation, in which over 100 English‐acquiring 9‐month‐olds were exposed to unfamiliar bisyllabic words embedded in sentences (e.g., He saw a wild eagle up there), after which their brain responses to either the just‐familiarized word (eagle) or a control word (coral) were recorded. When initial exposure occurs in continuous speech, as here, past studies have reported that even somewhat older infants do not reliably recognize target words, but that successful segmentation varies across children. Here, we both confirm and further uncover the nature of this variation. The segmentation response systematically varied across individuals and was related to their vocabulary development. About one‐third of the group showed a left‐frontally located relative negativity in response to familiar versus control targets, which has previously been described as a mature response. Another third showed a similarly located positive‐going reaction (a previously described immature response), and the remaining third formed an intermediate grouping that was primarily characterized by an initial response delay. A fine‐grained group‐level analysis suggested that a developmental shift to a lexical mode of processing occurs toward the end of the first year, with variation across individual infants in the exact timing of this shift.
Norris, D., McQueen, J. M., & Cutler, A. (2018). Commentary on “Interaction in spoken word recognition models". Frontiers in Psychology, 9: 1568. doi:10.3389/fpsyg.2018.01568.
Choi, J., Broersma, M., & Cutler, A. (2015). Enhanced processing of a lost language: Linguistic knowledge or linguistic skill? In Proceedings of Interspeech 2015: 16th Annual Conference of the International Speech Communication Association (pp. 3110-3114).
Abstract
Same-different discrimination judgments for pairs of Korean stop consonants, or of Japanese syllables differing in phonetic segment length, were made by adult Korean adoptees in the Netherlands, by matched Dutch controls, and by Korean controls. The adoptees did not outdo either control group on either task, although the same individuals had performed significantly better than matched controls on an identification learning task. This suggests that early exposure to multiple phonetic systems does not specifically improve acoustic-phonetic skills; rather, enhanced performance suggests retained language knowledge.
Cutler, A. (2015). Lexical stress in English pronunciation. In M. Reed, & J. M. Levis (Eds.), The handbook of English pronunciation (pp. 106-124). Chichester: Wiley.
Cutler, A. (2015). Representation of second language phonology. Applied Psycholinguistics, 36(1), 115-128. doi:10.1017/S0142716414000459.
Abstract
Orthographies encode phonological information only at the level of words (chiefly, the information encoded concerns phonetic segments; in some cases, tonal information or default stress may be encoded). Of primary interest to second language (L2) learners is whether orthography can assist in clarifying L2 phonological distinctions that are particularly difficult to perceive (e.g., where one native-language phonemic category captures two L2 categories). A review of spoken-word recognition evidence suggests that orthographic information can install knowledge of such a distinction in lexical representations but that this does not affect learners’ ability to perceive the phonemic distinction in speech. Words containing the difficult phonemes become even harder for L2 listeners to recognize, because perception maps less accurately to lexical content.
Ernestus, M., & Cutler, A. (2015). BALDEY: A database of auditory lexical decisions. Quarterly Journal of Experimental Psychology, 68, 1469-1488. doi:10.1080/17470218.2014.984730.
Abstract
In an auditory lexical decision experiment, 5,541 spoken content words and pseudo-words were presented to 20 native speakers of Dutch. The words vary in phonological makeup and in number of syllables and stress pattern, and are further representative of the native Dutch vocabulary in that most are morphologically complex, comprising two stems or one stem plus derivational and inflectional suffixes, with inflections representing both regular and irregular paradigms; the pseudo-words were matched in these respects to the real words. The BALDEY data file includes response times and accuracy rates, with, for each item, morphological information plus phonological and acoustic information derived from automatic phonemic segmentation of the stimuli. Two initial analyses illustrate how this data set can be used. First, we discuss several measures of the point at which a word has no further neighbors, and compare the degree to which each measure predicts our lexical decision response outcomes. Second, we investigate how well four different measures of frequency of occurrence (from written corpora, spoken corpora, subtitles and frequency ratings by 70 participants) predict the same outcomes. These analyses motivate general conclusions about the auditory lexical decision task. The (publicly available) BALDEY database lends itself to many further analyses.
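As an illustration of the second analysis the abstract describes, the sketch below fits one regression per frequency measure and compares how well each predicts log response time. This is a minimal sketch under stated assumptions, not the authors' actual analysis; the file name and column names (rt, accuracy, is_word, freq_written, freq_spoken, freq_subtitle, freq_rating) are hypothetical stand-ins for the real BALDEY field names.

```python
# Hypothetical comparison of frequency measures as predictors of auditory
# lexical decision RT, in the spirit of the BALDEY paper's second analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("baldey.csv")  # assumed export of the BALDEY data file

# Restrict to correct responses to real words, as is standard for RT analyses.
words = df[(df["is_word"] == 1) & (df["accuracy"] == 1)].copy()

# Log-transform RTs and frequencies to reduce skew.
words["log_rt"] = np.log(words["rt"])
measures = ["freq_written", "freq_spoken", "freq_subtitle", "freq_rating"]

# Fit one simple regression per frequency measure and compare model fit.
for m in measures:
    words[f"log_{m}"] = np.log(words[m] + 1)  # +1 guards against log(0)
    fit = smf.ols(f"log_rt ~ log_{m}", data=words).fit()
    print(f"{m}: R^2 = {fit.rsquared:.3f}")
```

The measure with the highest R^2 would, on this toy criterion, be the frequency estimate that best predicts the lexical decision outcomes.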
Burnham, D., Ambikairajah, E., Arciuli, J., Bennamoun, M., Best, C. T., Bird, S., Butcher, A. R., Cassidy, S., Chetty, G., Cox, F. M., Cutler, A., Dale, R., Epps, J. R., Fletcher, J. M., Goecke, R., Grayden, D. B., Hajek, J. T., Ingram, J. C., Ishihara, S., Kemp, N., Kinoshita, Y., Kuratate, T., Lewis, T. W., Loakes, D. E., Onslow, M., Powers, D. M., Rose, P., Togneri, R., Tran, D., & Wagner, M. (2009). A blueprint for a comprehensive Australian English auditory-visual speech corpus. In M. Haugh, K. Burridge, J. Mulder, & P. Peters (Eds.), Selected proceedings of the 2008 HCSNet Workshop on Designing the Australian National Corpus (pp. 96-107). Somerville, MA: Cascadilla Proceedings Project.
Abstract
Large auditory-visual (AV) speech corpora are the grist of modern research in speech science, but no such corpus exists for Australian English. This is unfortunate, for speech science is the brains behind speech technology and applications such as text-to-speech (TTS) synthesis, automatic speech recognition (ASR), speaker recognition and forensic identification, talking heads, and hearing prostheses. Advances in these research areas in Australia require a large corpus of Australian English. Here the authors describe a blueprint for building the Big Australian Speech Corpus (the Big ASC), a corpus of over 1,100 speakers from urban and rural Australia, including speakers of non-indigenous, indigenous, ethnocultural, and disordered forms of Australian English, each of whom would be sampled on three occasions in a range of speech tasks designed by the researchers who would be using the corpus.
Cutler, A. (2009). Greater sensitivity to prosodic goodness in non-native than in native listeners. Journal of the Acoustical Society of America, 125, 3522-3525. doi:10.1121/1.3117434.
Abstract
English listeners largely disregard suprasegmental cues to stress in recognizing words. Evidence for this includes the demonstration of Fear et al. [J. Acoust. Soc. Am. 97, 1893-1904 (1995)] that cross-splicings are tolerated between stressed and unstressed full vowels (e.g., au- of autumn, automata). Dutch listeners, however, do exploit suprasegmental stress cues in recognizing native-language words. In this study, Dutch listeners were presented with English materials from the study of Fear et al. Acceptability ratings by these listeners revealed sensitivity to suprasegmental mismatch, in particular, in replacements of unstressed full vowels by higher-stressed vowels, thus evincing greater sensitivity to prosodic goodness than had been shown by the original native listener group.
Cutler, A., Davis, C., & Kim, J. (2009). Non-automaticity of use of orthographic knowledge in phoneme evaluation. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 380-383). Causal Productions Pty Ltd.
Abstract
Two phoneme goodness rating experiments addressed the role of orthographic knowledge in the evaluation of speech sounds. Ratings for the best tokens of /s/ were higher in words spelled with S (e.g., bless) than in words where /s/ was spelled with C (e.g., voice). This difference did not appear for analogous nonwords for which every lexical neighbour had either S or C spelling (pless, floice). Models of phonemic processing incorporating obligatory influence of lexical information in phonemic processing cannot explain this dissociation; the data are consistent with models in which phonemic decisions are not subject to necessary top-down lexical influence.
Cutler, A. (2009). Psycholinguistics in our time. In P. Rabbitt (Ed.), Inside psychology: A science over 50 years (pp. 91-101). Oxford: Oxford University Press.
Cutler, A., Otake, T., & McQueen, J. M. (2009). Vowel devoicing and the perception of spoken Japanese words. Journal of the Acoustical Society of America, 125(3), 1693-1703. doi:10.1121/1.3075556.
Abstract
Three experiments, in which Japanese listeners detected Japanese words embedded in nonsense sequences, examined the perceptual consequences of vowel devoicing in that language. Since vowelless sequences disrupt speech segmentation [Norris et al. (1997). Cognit. Psychol. 34, 191-243], devoicing is potentially problematic for perception. Words in initial position in nonsense sequences were detected more easily when followed by a sequence containing a vowel than by a vowelless segment (with or without further context), and vowelless segments that were potential devoicing environments were no easier than those not allowing devoicing. Thus asa, “morning,” was easier in asau or asazu than in all of asap, asapdo, asaf, or asafte, despite the fact that the /f/ in the latter two is a possible realization of fu, with devoiced [u]. Japanese listeners thus do not treat devoicing contexts as if they always contain vowels. Words in final position in nonsense sequences, however, produced a different pattern: here, preceding vowelless contexts allowing devoicing impeded word detection less strongly (so, sake was detected less accurately, but not less rapidly, in nyaksake—possibly arising from nyakusake—than in nyagusake). This is consistent with listeners treating consonant sequences as potential realizations of parts of existing lexical candidates wherever possible.
Kooijman, V., Hagoort, P., & Cutler, A. (2009). Prosodic structure in early word segmentation: ERP evidence from Dutch ten-month-olds. Infancy, 14, 591-612. doi:10.1080/15250000903263957.
Abstract
Recognizing word boundaries in continuous speech requires detailed knowledge of the native language. In the first year of life, infants acquire considerable word segmentation abilities. Infants at this early stage in word segmentation rely to a large extent on the metrical pattern of their native language, at least in stress-based languages. In Dutch and English (both languages with a preferred trochaic stress pattern), segmentation of strong-weak words develops rapidly between 7 and 10 months of age. Nevertheless, trochaic languages contain not only strong-weak words but also words with a weak-strong stress pattern. In this article, we present electrophysiological evidence of the beginnings of weak-strong word segmentation in Dutch 10-month-olds. At this age, the ability to combine different cues for efficient word segmentation does not yet seem to be completely developed. We provide evidence that Dutch infants still largely rely on strong syllables, even for the segmentation of weak-strong words.
Tyler, M., & Cutler, A. (2009). Cross-language differences in cue use for speech segmentation. Journal of the Acoustical Society of America, 126, 367-376. doi:10.1121/1.3129127.
Abstract
Two artificial-language learning experiments directly compared English, French, and Dutch listeners’ use of suprasegmental cues for continuous-speech segmentation. In both experiments, listeners heard unbroken sequences of consonant-vowel syllables, composed of recurring three- and four-syllable “words.” These words were demarcated by (a) no cue other than transitional probabilities induced by their recurrence, (b) a consistent left-edge cue, or (c) a consistent right-edge cue. Experiment 1 examined a vowel lengthening cue. All three listener groups benefited from this cue in right-edge position; none benefited from it in left-edge position. Experiment 2 examined a pitch-movement cue. English listeners used this cue in left-edge position, French listeners used it in right-edge position, and Dutch listeners used it in both positions. These findings are interpreted as evidence of both language-universal and language-specific effects. Final lengthening is a language-universal effect expressing a more general (non-linguistic) mechanism. Pitch movement expresses prominence which has characteristically different placements across languages: typically at right edges in French, but at left edges in English and Dutch. Finally, stress realization in English versus Dutch encourages greater attention to suprasegmental variation by Dutch than by English listeners, allowing Dutch listeners to benefit from an informative pitch-movement cue even in an uncharacteristic position.
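To make the transitional-probability cue in condition (a) concrete, the sketch below computes syllable-to-syllable transitional probabilities over an artificial stream of recurring "words": within a word each syllable predicts the next with probability 1.0, while across word boundaries many continuations are possible, so the probability drops. The syllable inventory here is made up for illustration and is not the actual stimulus set from the study.

```python
# Illustrative computation of transitional probabilities (TPs) in an
# artificial-language stream; low TP marks a likely word boundary.
from collections import Counter
import random

random.seed(1)  # reproducible stream

# Made-up three- and four-syllable "words" (not the study's stimuli).
words = [["pa", "bi", "ku"], ["ti", "bu", "do"], ["go", "la", "tu", "ke"]]
stream = [syl for w in random.choices(words, k=500) for syl in w]

pair_counts = Counter(zip(stream, stream[1:]))  # counts of adjacent syllable pairs
syl_counts = Counter(stream[:-1])               # counts of first-of-pair syllables

def tp(a, b):
    """P(b | a): probability that syllable b immediately follows syllable a."""
    return pair_counts[(a, b)] / syl_counts[a]

print(tp("pa", "bi"))  # word-internal transition: 1.0
print(tp("ku", "ti"))  # across a word boundary: roughly 1/3
```

The contrast between the two printed values is the statistic that, absent any edge cue, is the only information demarcating the words.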
Cutler, A., & Foss, D. (1977). On the role of sentence stress in sentence processing. Language and Speech, 20, 1-10.
Cutler, A. (1977). The context-dependence of "intonational meanings". In W. Beach, S. Fox, & S. Philosoph (Eds.), Papers from the Thirteenth Regional Meeting, Chicago Linguistic Society (pp. 104-115). Chicago, Ill.: CLS.
Cutler, A. (1977). The psychological reality of word formation and lexical stress rules. In E. Fischer-Jørgensen, J. Rischel, & N. Thorsen (Eds.), Proceedings of the Ninth International Congress of Phonetic Sciences: Vol. 2 (pp. 79-85). Copenhagen: Institute of Phonetics, University of Copenhagen.
Fay, D., & Cutler, A. (1977). Malapropisms and the structure of the mental lexicon. Linguistic Inquiry, 8, 505-520. Retrieved from http://www.jstor.org/stable/4177997.
Cutler, A. (1976). High-stress words are easier to perceive than low-stress words, even when they are equally stressed. Texas Linguistic Forum, 2, 53-57.
Cutler, A. (1976). Phoneme-monitoring reaction time as a function of preceding intonation contour. Perception and Psychophysics, 20, 55-60. Retrieved from http://www.psychonomic.org/search/view.cgi?id=18194.
Abstract
An acoustically invariant one-word segment occurred in two versions of one syntactic context. In one version, the preceding intonation contour indicated that a stress would fall at the point where this word occurred. In the other version, the preceding contour predicted reduced stress at that point. Reaction time to the initial phoneme of the word was faster in the former case, despite the fact that no acoustic correlates of stress were present. It is concluded that a part of the sentence comprehension process is the prediction of upcoming sentence accents.