Cutler, A., & Jesse, A. (2021). Word stress in speech perception. In J. S. Pardo, L. C. Nygaard, & D. B. Pisoni (Eds.), The handbook of speech perception (2nd ed., pp. 239-265). Chichester: Wiley.
Cutler, A., Aslin, R. N., Gervain, J., & Nespor, M. (2021). Special issue in honor of Jacques Mehler, Cognition's founding editor [preface]. Cognition, 213: 104786. doi:10.1016/j.cognition.2021.104786.
Kember, H., Choi, J., Yu, J., & Cutler, A. (2021). The processing of linguistic prominence. Language and Speech, 64(2), 413-436. doi:10.1177/0023830919880217.
Abstract
Prominence, the expression of informational weight within utterances, can be signaled by
prosodic highlighting (head-prominence, as in English) or by position (as in Korean edge-prominence).
Prominence confers processing advantages, even if conveyed only by discourse manipulations. Here
we compared processing of prominence in English and Korean, using a task that indexes processing
success, namely recognition memory. In each language, participants’ memory was tested for target
words heard in sentences in which they were prominent due to prosody, position, both or neither.
Prominence produced a recall advantage, but the relative effects differed across languages. For Korean
listeners the positional advantage was greater, but for English listeners prosodic and syntactic
prominence had equivalent and additive effects. In a further experiment, semantic and phonological
foils tested depth of processing of the recall targets. Both foil types were correctly rejected,
suggesting that semantic processing had not reached the level at which word form was no longer
available. Together the results suggest that prominence processing is primarily driven by universal
effects of information structure; but language-specific differences in frequency of experience prompt
different relative advantages of prominence signal types. Processing efficiency increases in each case,
however, creating more accurate and more rapidly contactable memory representations.
Norris, D., & Cutler, A. (2021). More why, less how: What we need from models of cognition. Cognition, 213: 104688. doi:10.1016/j.cognition.2021.104688.
Abstract
Science regularly experiences periods in which simply describing the world is prioritised over attempting to explain it. Cognition, this journal, came into being some 45 years ago as an attempt to lay one such period to rest; without doubt, it has helped create the current cognitive science climate in which theory is decidedly welcome. Here we summarise the reasons why a theoretical approach is imperative in our field, and call attention to some potentially counter-productive trends in which cognitive models are concerned too exclusively with how processes work at the expense of why the processes exist in the first place and thus what the goal of modelling them must be.
Zhou, W., Broersma, M., & Cutler, A. (2021). Asymmetric memory for birth language perception versus production in young international adoptees. Cognition, 213: 104788. doi:10.1016/j.cognition.2021.104788.
Abstract
Adults who as children were adopted into a different linguistic community retain knowledge of their birth language. The possession (without awareness) of such knowledge is known to facilitate the (re)learning of birth-language speech patterns; this perceptual learning predicts such adults' production success as well, indicating that the retained linguistic knowledge is abstract in nature. Adoptees' acquisition of their adopted language is fast and complete; birth-language mastery disappears rapidly, although this latter process has been little studied. Here, 46 international adoptees from China aged four to 10 years, with Dutch as their new language, plus 47 matched non-adopted Dutch-native controls and 40 matched non-adopted Chinese controls, undertook across a two-week period 10 blocks of training in perceptually identifying Chinese speech contrasts (one segmental, one tonal) which were unlike any Dutch contrasts. Chinese controls easily accomplished all these tasks. The same participants also provided speech production data in an imitation task. In perception, adoptees and Dutch controls scored equivalently poorly at the outset of training; with training, the adoptees significantly improved while the Dutch controls did not. In production, adoptees' imitations both before and after training could be better identified, and received higher goodness ratings, than those of Dutch controls. The perception results confirm that birth-language knowledge is stored and can facilitate re-learning in post-adoption childhood; the production results suggest that although processing of phonological category detail appears to depend on access to the stored knowledge, general articulatory dimensions can at this age also still be remembered, and may facilitate spoken imitation.
Choi, J., Broersma, M., & Cutler, A. (2018). Phonetic learning is not enhanced by sequential exposure to more than one language. Linguistic Research, 35(3), 567-581. doi:10.17250/khisli.35.3.201812.006.
Abstract
Several studies have documented that international adoptees, who in early years have
experienced a change from a language used in their birth country to a new language
in an adoptive country, benefit from the limited early exposure to the birth language
when relearning that language’s sounds later in life. The adoptees’ relearning advantages
have been argued to be conferred by lasting birth-language knowledge obtained from
the early exposure. However, it is also plausible to assume that the advantages may
arise from adoptees’ superior ability to learn language sounds in general, as a result
of their unusual linguistic experience, i.e., exposure to multiple languages in sequence
early in life. If this is the case, then the adoptees’ relearning benefits should generalize
to previously unheard language sounds, rather than be limited to their birth-language
sounds. In the present study, adult Korean adoptees in the Netherlands and matched
Dutch-native controls were trained on identifying a Japanese length distinction to which
they had never been exposed before. The adoptees and Dutch controls did not differ
on any test carried out before, during, or after the training, indicating that observed
adoptee advantages for birth-language relearning do not generalize to novel, previously
unheard language sounds. The finding thus fails to support the suggestion that
birth-language relearning advantages may arise from enhanced ability to learn language
sounds in general conferred by early experience in multiple languages. Rather, our
finding supports the original contention that such advantages involve memory traces
obtained before adoption.
Cutler, A., & Farrell, J. (2018). Listening in first and second language. In J. I. Liontas (Ed.), The TESOL encyclopedia of language teaching. New York: Wiley. doi:10.1002/9781118784235.eelt0583.
Abstract
Listeners' recognition of spoken language involves complex decoding processes: The continuous speech stream must be segmented into its component words, and words must be recognized despite great variability in their pronunciation (due to talker differences, or to influence of phonetic context, or to speech register) and despite competition from many spuriously present forms supported by the speech signal. L1 listeners deal more readily with all levels of this complexity than L2 listeners. Fortunately, the decoding processes necessary for competent L2 listening can be taught in the classroom. Evidence-based methodologies targeted at the development of efficient speech decoding include teaching of minimal pairs, of phonotactic constraints, and of reduction processes, as well as the use of dictation and L2 video captions.
Johnson, E. K., Bruggeman, L., & Cutler, A. (2018). Abstraction and the (misnamed) language familiarity effect. Cognitive Science, 42, 633-645. doi:10.1111/cogs.12520.
Abstract
Talkers are recognized more accurately if they are speaking the listeners' native language rather than an unfamiliar language. This "language familiarity effect" has been shown not to depend upon comprehension and must instead involve language sound patterns. We further examine the level of sound-pattern processing involved, by comparing talker recognition in foreign languages versus two varieties of English, by (a) English speakers of one variety, (b) English speakers of the other variety, and (c) non-native listeners (more familiar with one of the varieties). All listener groups performed better with native than foreign speech, but no effect of language variety appeared: Native listeners discriminated talkers equally well in each, with the native variety never outdoing the other variety, and non-native listeners discriminated talkers equally poorly in each, irrespective of the variety's familiarity. The results suggest that this talker recognition effect rests not on simple familiarity, but on an abstract level of phonological processing.
Kidd, E., Junge, C., Spokes, T., Morrison, L., & Cutler, A. (2018). Individual differences in infant speech segmentation: Achieving the lexical shift. Infancy, 23(6), 770-794. doi:10.1111/infa.12256.
Abstract
We report a large‐scale electrophysiological study of infant speech segmentation, in which over 100 English‐acquiring 9‐month‐olds were exposed to unfamiliar bisyllabic words embedded in sentences (e.g., He saw a wild eagle up there), after which their brain responses to either the just‐familiarized word (eagle) or a control word (coral) were recorded. When initial exposure occurs in continuous speech, as here, past studies have reported that even somewhat older infants do not reliably recognize target words, but that successful segmentation varies across children. Here, we both confirm and further uncover the nature of this variation. The segmentation response systematically varied across individuals and was related to their vocabulary development. About one‐third of the group showed a left‐frontally located relative negativity in response to familiar versus control targets, which has previously been described as a mature response. Another third showed a similarly located positive‐going reaction (a previously described immature response), and the remaining third formed an intermediate grouping that was primarily characterized by an initial response delay. A fine‐grained group‐level analysis suggested that a developmental shift to a lexical mode of processing occurs toward the end of the first year, with variation across individual infants in the exact timing of this shift.
Norris, D., McQueen, J. M., & Cutler, A. (2018). Commentary on “Interaction in spoken word recognition models". Frontiers in Psychology, 9: 1568. doi:10.3389/fpsyg.2018.01568.
Broersma, M., & Cutler, A. (2011). Competition dynamics of second-language listening. Quarterly Journal of Experimental Psychology, 64, 74-95. doi:10.1080/17470218.2010.499174.
Abstract
Spoken-word recognition in a nonnative language is particularly difficult where it depends on discrimination between confusable phonemes. Four experiments here examine whether this difficulty is in part due to phantom competition from "near-words" in speech. Dutch listeners confuse English /æ/ and /ε/, which could lead to the sequence daf being interpreted as deaf, or lemp being interpreted as lamp. In auditory lexical decision, Dutch listeners indeed accepted such near-words as real English words more often than English listeners did. In cross-modal priming, near-words extracted from word or phrase contexts (daf from DAFfodil, lemp from eviL EMPire) induced activation of corresponding real words (deaf; lamp) for Dutch, but again not for English, listeners. Finally, by the end of untruncated carrier words containing embedded words or near-words (definite; daffodil) no activation of the real embedded forms (deaf in definite) remained for English or Dutch listeners, but activation of embedded near-words (deaf in daffodil) did still remain, for Dutch listeners only. Misinterpretation of the initial vowel here favoured the phantom competitor and disfavoured the carrier (lexically represented as containing a different vowel). Thus, near-words compete for recognition and continue competing for longer than actually embedded words; nonnative listening indeed involves phantom competition.
Cutler, A. (2011). Listening to REAL second language. AATSEEL Newsletter, 54(3), 14.
Johnson, E. K., Westrek, E., Nazzi, T., & Cutler, A. (2011). Infant ability to tell voices apart rests on language experience. Developmental Science, 14(5), 1002-1011. doi:10.1111/j.1467-7687.2011.01052.x.
Abstract
A visual fixation study tested whether seven-month-olds can discriminate between different talkers. The infants were first habituated to talkers producing sentences in either a familiar or unfamiliar language, then heard test sentences from previously unheard speakers, either in the language used for habituation, or in another language. When the language at test mismatched that in habituation, infants always noticed the change. When language remained constant and only talker altered, however, infants detected the change only if the language was the native tongue. Adult listeners with a different native tongue than the infants did not reproduce the discriminability patterns shown by the infants, and infants detected neither voice nor language changes in reversed speech; both these results argue against explanation of the native-language voice discrimination in terms of acoustic properties of the stimuli. The ability to identify talkers is, like many other perceptual abilities, strongly influenced by early life experience.
Tuinman, A., & Cutler, A. (2011). L1 knowledge and the perception of casual speech processes in L2. In M. Wrembel, M. Kul, & K. Dziubalska-Kolaczyk (Eds.), Achievements and perspectives in SLA of speech: New Sounds 2010. Volume I (pp. 289-301). Frankfurt am Main: Peter Lang.
Abstract
Every language manifests casual speech processes, and hence every second language too. This study examined how listeners deal with second-language casual speech processes, as a function of the processes in their native language. We compared a match case, where a second-language process (/t/-reduction) is also operative in native speech, with a mismatch case, where a second-language process (/r/-insertion) is absent from native speech. In each case native and non-native listeners judged stimuli in which a given phoneme (in sentence context) varied along a continuum from absent to present. Second-language listeners in general mimicked native performance in the match case, but deviated significantly from native performance in the mismatch case. Together these results make it clear that the mapping from first to second language is as important in the interpretation of casual speech processes as in other dimensions of speech perception. Unfamiliar casual speech processes are difficult to adapt to in a second language. Casual speech processes that are already familiar from native speech, however, are easy to adapt to; indeed, our results even suggest that it is possible for subtle differences in their occurrence patterns across the two languages to be detected, and to be accommodated to in second-language listening.
Tuinman, A., Mitterer, H., & Cutler, A. (2011). Perception of intrusive /r/ in English by native, cross-language and cross-dialect listeners. Journal of the Acoustical Society of America, 130, 1643-1652. doi:10.1121/1.3619793.
Abstract
In sequences such as law and order, speakers of British English often insert /r/ between law and and. Acoustic analyses revealed such "intrusive" /r/ to be significantly shorter than canonical /r/. In a 2AFC experiment, native listeners heard British English sentences in which /r/ duration was manipulated across a word boundary [e.g., saw (r)ice], and orthographic and semantic factors were varied. These listeners responded categorically on the basis of acoustic evidence for /r/ alone, reporting ice after short /r/s, rice after long /r/s; orthographic and semantic factors had no effect. Dutch listeners proficient in English who heard the same materials relied less on durational cues than the native listeners, and were affected by both orthography and semantic bias. American English listeners produced intermediate responses to the same materials, being sensitive to duration (less so than native, more so than Dutch listeners), and to orthography (less so than the Dutch), but insensitive to the semantic manipulation. Listeners from language communities without common use of intrusive /r/ may thus interpret intrusive /r/ as canonical /r/, with a language difference increasing this propensity more than a dialect difference. Native listeners, however, efficiently distinguish intrusive from canonical /r/ by exploiting the relevant acoustic variation.
Cutler, A. (2009). Greater sensitivity to prosodic goodness in non-native than in native listeners. Journal of the Acoustical Society of America, 125, 3522-3525. doi:10.1121/1.3117434.
Abstract
English listeners largely disregard suprasegmental cues to stress in recognizing words. Evidence for this includes the demonstration of Fear et al. [J. Acoust. Soc. Am. 97, 1893–1904 (1995)] that cross-splicings are tolerated between stressed and unstressed full vowels (e.g., au- of autumn, automata). Dutch listeners, however, do exploit suprasegmental stress cues in recognizing native-language words. In this study, Dutch listeners were presented with English materials from the study of Fear et al. Acceptability ratings by these listeners revealed sensitivity to suprasegmental mismatch, in particular, in replacements of unstressed full vowels by higher-stressed vowels, thus evincing greater sensitivity to prosodic goodness than had been shown by the original native listener group.
Cutler, A. (2009). Psycholinguistics in our time. In P. Rabbitt (Ed.), Inside psychology: A science over 50 years (pp. 91-101). Oxford: Oxford University Press.
Cutler, A., Otake, T., & McQueen, J. M. (2009). Vowel devoicing and the perception of spoken Japanese words. Journal of the Acoustical Society of America, 125(3), 1693-1703. doi:10.1121/1.3075556.
Abstract
Three experiments, in which Japanese listeners detected Japanese words embedded in nonsense sequences, examined the perceptual consequences of vowel devoicing in that language. Since vowelless sequences disrupt speech segmentation [Norris et al. (1997). Cognit. Psychol. 34, 191– 243], devoicing is potentially problematic for perception. Words in initial position in nonsense sequences were detected more easily when followed by a sequence containing a vowel than by a vowelless segment (with or without further context), and vowelless segments that were potential devoicing environments were no easier than those not allowing devoicing. Thus asa, "morning," was easier in asau or asazu than in all of asap, asapdo, asaf, or asafte, despite the fact that the /f/ in the latter two is a possible realization of fu, with devoiced [u]. Japanese listeners thus do not treat devoicing contexts as if they always contain vowels. Words in final position in nonsense sequences, however, produced a different pattern: here, preceding vowelless contexts allowing devoicing impeded word detection less strongly (so, sake was detected less accurately, but not less rapidly, in nyaksake—possibly arising from nyakusake—than in nyagusake). This is consistent with listeners treating consonant sequences as potential realizations of parts of existing lexical candidates wherever possible.
Kooijman, V., Hagoort, P., & Cutler, A. (2009). Prosodic structure in early word segmentation: ERP evidence from Dutch ten-month-olds. Infancy, 14, 591-612. doi:10.1080/15250000903263957.
Abstract
Recognizing word boundaries in continuous speech requires detailed knowledge of the native language. In the first year of life, infants acquire considerable word segmentation abilities. Infants at this early stage in word segmentation rely to a large extent on the metrical pattern of their native language, at least in stress-based languages. In Dutch and English (both languages with a preferred trochaic stress pattern), segmentation of strong-weak words develops rapidly between 7 and 10 months of age. Nevertheless, trochaic languages contain not only strong-weak words but also words with a weak-strong stress pattern. In this article, we present electrophysiological evidence of the beginnings of weak-strong word segmentation in Dutch 10-month-olds. At this age, the ability to combine different cues for efficient word segmentation does not yet seem to be completely developed. We provide evidence that Dutch infants still largely rely on strong syllables, even for the segmentation of weak-strong words.
Tyler, M., & Cutler, A. (2009). Cross-language differences in cue use for speech segmentation. Journal of the Acoustical Society of America, 126, 367-376. doi:10.1121/1.3129127.
Abstract
Two artificial-language learning experiments directly compared English, French, and Dutch listeners' use of suprasegmental cues for continuous-speech segmentation. In both experiments, listeners heard unbroken sequences of consonant-vowel syllables, composed of recurring three- and four-syllable "words." These words were demarcated by (a) no cue other than transitional probabilities induced by their recurrence, (b) a consistent left-edge cue, or (c) a consistent right-edge cue. Experiment 1 examined a vowel lengthening cue. All three listener groups benefited from this cue in right-edge position; none benefited from it in left-edge position. Experiment 2 examined a pitch-movement cue. English listeners used this cue in left-edge position, French listeners used it in right-edge position, and Dutch listeners used it in both positions. These findings are interpreted as evidence of both language-universal and language-specific effects. Final lengthening is a language-universal effect expressing a more general (non-linguistic) mechanism. Pitch movement expresses prominence, which has characteristically different placements across languages: typically at right edges in French, but at left edges in English and Dutch. Finally, stress realization in English versus Dutch encourages greater attention to suprasegmental variation by Dutch than by English listeners, allowing Dutch listeners to benefit from an informative pitch-movement cue even in an uncharacteristic position.
Connine, C. M., Clifton, Jr., C., & Cutler, A. (1987). Effects of lexical stress on phonetic categorization. Phonetica, 44, 133-146.
Cutler, A., Norris, D., & Williams, J. (1987). A note on the role of phonological expectations in speech segmentation. Journal of Memory and Language, 26, 480-487. doi:10.1016/0749-596X(87)90103-3.
Abstract
Word-initial CVC syllables are detected faster in words beginning consonant-vowel-consonant-vowel (CVCV-) than in words beginning consonant-vowel-consonant-consonant (CVCC-). This effect was reported independently by M. Taft and G. Hambly (1985, Journal of Memory and Language, 24, 320–335) and by A. Cutler, J. Mehler, D. Norris, and J. Segui (1986, Journal of Memory and Language, 25, 385–400). Taft and Hambly explained the effect in terms of lexical factors. This explanation cannot account for Cutler et al.'s results, in which the effect also appeared with nonwords and foreign words. Cutler et al. suggested that CVCV-sequences might simply be easier to perceive than CVCC-sequences. The present study confirms this suggestion, and explains it as a reflection of listener expectations constructed on the basis of distributional characteristics of the language.
Cutler, A., Mehler, J., Norris, D., & Segui, J. (1987). Phoneme identification and the lexicon. Cognitive Psychology, 19, 141-177. doi:10.1016/0010-0285(87)90010-7.
Cutler, A. (1987). Speaking for listening. In A. Allport, D. MacKay, W. Prinz, & E. Scheerer (Eds.), Language perception and production: Relationships between listening, speaking, reading and writing (pp. 23-40). London: Academic Press.
Abstract
Speech production is constrained at all levels by the demands of speech perception. The speaker's primary aim is successful communication, and to this end semantic, syntactic and lexical choices are directed by the needs of the listener. Even at the articulatory level, some aspects of production appear to be perceptually constrained, for example the blocking of phonological distortions under certain conditions. An apparent exception to this pattern is word boundary information, which ought to be extremely useful to listeners, but which is not reliably coded in speech. It is argued that the solution to this apparent problem lies in rethinking the concept of the boundary of the lexical access unit. Speech rhythm provides clear information about the location of stressed syllables, and listeners do make use of this information. If stressed syllables can serve as the determinants of word lexical access codes, then once again speakers are providing precisely the necessary form of speech information to facilitate perception.
Cutler, A., Butterfield, S., & Williams, J. (1987). The perceptual integrity of syllabic onsets. Journal of Memory and Language, 26, 406-418. doi:10.1016/0749-596X(87)90099-4.
Cutler, A., & Carter, D. (1987). The predominance of strong initial syllables in the English vocabulary. Computer Speech and Language, 2, 133-142. doi:10.1016/0885-2308(87)90004-0.
Abstract
Studies of human speech processing have provided evidence for a segmentation strategy in the perception of continuous speech, whereby a word boundary is postulated, and a lexical access procedure initiated, at each metrically strong syllable. The likely success of this strategy was here estimated against the characteristics of the English vocabulary. Two computerized dictionaries were found to list approximately three times as many words beginning with strong syllables (i.e. syllables containing a full vowel) as beginning with weak syllables (i.e. syllables containing a reduced vowel). Consideration of frequency of lexical word occurrence reveals that words beginning with strong syllables occur on average more often than words beginning with weak syllables. Together, these findings motivate an estimate for everyday speech recognition that approximately 85% of lexical words (i.e. excluding function words) will begin with strong syllables. This estimate was tested against a corpus of 190 000 words of spontaneous British English conversation. In this corpus, 90% of lexical words were found to begin with strong syllables. This suggests that a strategy of postulating word boundaries at the onset of strong syllables would have a high success rate in that few actual lexical word onsets would be missed.
Cutler, A. (1987). The task of the speaker and the task of the hearer [Commentary/Sperber & Wilson: Relevance]. Behavioral and Brain Sciences, 10, 715-716.
Cutler, A., & Foss, D. (1977). On the role of sentence stress in sentence processing. Language and Speech, 20, 1-10.
Fay, D., & Cutler, A. (1977). Malapropisms and the structure of the mental lexicon. Linguistic Inquiry, 8, 505-520. Retrieved from http://www.jstor.org/stable/4177997.
Cutler, A. (1976). High-stress words are easier to perceive than low-stress words, even when they are equally stressed. Texas Linguistic Forum, 2, 53-57.
Cutler, A. (1976). Phoneme-monitoring reaction time as a function of preceding intonation contour. Perception and Psychophysics, 20, 55-60. Retrieved from http://www.psychonomic.org/search/view.cgi?id=18194.
Abstract
An acoustically invariant one-word segment occurred in two versions of one syntactic context. In one version, the preceding intonation contour indicated that a stress would fall at the point where this word occurred. In the other version, the preceding contour predicted reduced stress at that point. Reaction time to the initial phoneme of the word was faster in the former case, despite the fact that no acoustic correlates of stress were present. It is concluded that a part of the sentence comprehension process is the prediction of upcoming sentence accents.