Anne Cutler †

Publications

  • Choi, J., Broersma, M., & Cutler, A. (2018). Phonetic learning is not enhanced by sequential exposure to more than one language. Linguistic Research, 35(3), 567-581. doi:10.17250/khisli.35.3.201812.006.

    Abstract

    Several studies have documented that international adoptees, who in early years have
    experienced a change from a language used in their birth country to a new language
    in an adoptive country, benefit from the limited early exposure to the birth language
    when relearning that language’s sounds later in life. The adoptees’ relearning advantages
    have been argued to be conferred by lasting birth-language knowledge obtained from
    the early exposure. However, it is also plausible to assume that the advantages may
    arise from adoptees’ superior ability to learn language sounds in general, as a result
    of their unusual linguistic experience, i.e., exposure to multiple languages in sequence
    early in life. If this is the case, then the adoptees’ relearning benefits should generalize
    to previously unheard language sounds, rather than be limited to their birth-language
    sounds. In the present study, adult Korean adoptees in the Netherlands and matched
    Dutch-native controls were trained on identifying a Japanese length distinction to which
    they had never been exposed before. The adoptees and Dutch controls did not differ
    on any test carried out before, during, or after the training, indicating that observed
    adoptee advantages for birth-language relearning do not generalize to novel, previously
    unheard language sounds. The finding thus fails to support the suggestion that
    birth-language relearning advantages may arise from enhanced ability to learn language
    sounds in general conferred by early experience in multiple languages. Rather, our
    finding supports the original contention that such advantages involve memory traces
    obtained before adoption.
  • Johnson, E. K., Bruggeman, L., & Cutler, A. (2018). Abstraction and the (misnamed) language familiarity effect. Cognitive Science, 42, 633-645. doi:10.1111/cogs.12520.

    Abstract

    Talkers are recognized more accurately if they are speaking the listeners’ native language rather than an unfamiliar language. This “language familiarity effect” has been shown not to depend upon comprehension and must instead involve language sound patterns. We further examine the level of sound-pattern processing involved, by comparing talker recognition in foreign languages versus two varieties of English, by (a) English speakers of one variety, (b) English speakers of the other variety, and (c) non-native listeners (more familiar with one of the varieties). All listener groups performed better with native than foreign speech, but no effect of language variety appeared: Native listeners discriminated talkers equally well in each, with the native variety never outdoing the other variety, and non-native listeners discriminated talkers equally poorly in each, irrespective of the variety's familiarity. The results suggest that this talker recognition effect rests not on simple familiarity, but on an abstract level of phonological processing.
  • Kidd, E., Junge, C., Spokes, T., Morrison, L., & Cutler, A. (2018). Individual differences in infant speech segmentation: Achieving the lexical shift. Infancy, 23(6), 770-794. doi:10.1111/infa.12256.

    Abstract

    We report a large‐scale electrophysiological study of infant speech segmentation, in which over 100 English‐acquiring 9‐month‐olds were exposed to unfamiliar bisyllabic words embedded in sentences (e.g., He saw a wild eagle up there), after which their brain responses to either the just‐familiarized word (eagle) or a control word (coral) were recorded. When initial exposure occurs in continuous speech, as here, past studies have reported that even somewhat older infants do not reliably recognize target words, but that successful segmentation varies across children. Here, we both confirm and further uncover the nature of this variation. The segmentation response systematically varied across individuals and was related to their vocabulary development. About one‐third of the group showed a left‐frontally located relative negativity in response to familiar versus control targets, which has previously been described as a mature response. Another third showed a similarly located positive‐going reaction (a previously described immature response), and the remaining third formed an intermediate grouping that was primarily characterized by an initial response delay. A fine‐grained group‐level analysis suggested that a developmental shift to a lexical mode of processing occurs toward the end of the first year, with variation across individual infants in the exact timing of this shift.

  • Norris, D., McQueen, J. M., & Cutler, A. (2018). Commentary on “Interaction in spoken word recognition models". Frontiers in Psychology, 9: 1568. doi:10.3389/fpsyg.2018.01568.
  • Broersma, M., & Cutler, A. (2008). Phantom word activation in L2. System, 36(1), 22-34. doi:10.1016/j.system.2007.11.003.

    Abstract

    L2 listening can involve the phantom activation of words which are not actually in the input. All spoken-word recognition involves multiple concurrent activation of word candidates, with selection of the correct words achieved by a process of competition between them. L2 listening involves more such activation than L1 listening, and we report two studies illustrating this. First, in a lexical decision study, L2 listeners accepted (but L1 listeners did not accept) spoken non-words such as groof or flide as real English words. Second, a priming study demonstrated that the same spoken non-words made recognition of the real words groove, flight easier for L2 (but not L1) listeners, suggesting that, for the L2 listeners only, these real words had been activated by the spoken non-word input. We propose that further understanding of the activation and competition process in L2 lexical processing could lead to new understanding of L2 listening difficulty.
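    The activation-and-competition process this abstract describes can be caricatured in a few lines of code. The sketch below is invented for illustration and is not the model used in the paper: a four-word phonemic lexicon, a hand-picked set of confusable phoneme pairs standing in for an L2 listener's uncertain categories, and a segment-overlap score standing in for activation.

```python
# Toy activation sketch (invented; not the authors' model). Words are
# phoneme tuples; CONFUSABLE pairs stand in for the blurred phoneme
# categories of an L2 listener.
LEXICON = {
    "groove": ("g", "r", "u", "v"),
    "groom":  ("g", "r", "u", "m"),
    "flight": ("f", "l", "aI", "t"),
    "slide":  ("s", "l", "aI", "d"),
}
CONFUSABLE = {frozenset(("f", "v")), frozenset(("t", "d"))}

def activation(candidate, heard):
    """Proportion of segments matching, counting confusable pairs as matches."""
    hits = sum(c == h or frozenset((c, h)) in CONFUSABLE
               for c, h in zip(candidate, heard))
    return hits / max(len(candidate), len(heard))

def most_active(heard):
    """The lexical candidate that wins the competition for this input."""
    return max(LEXICON, key=lambda w: activation(LEXICON[w], heard))

print(most_active(("g", "r", "u", "f")))   # groove
print(most_active(("f", "l", "aI", "d")))  # flight
```

    Under the f/v and t/d confusability assumed here, the study's non-words ("groof", "flide") activate the real words "groove" and "flight" most strongly, which is the phantom-activation pattern the abstract reports for L2 listeners.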
  • Cutler, A. (2008). The abstract representations in speech processing. Quarterly Journal of Experimental Psychology, 61(11), 1601-1619. doi:10.1080/13803390802218542.

    Abstract

    Speech processing by human listeners derives meaning from acoustic input via intermediate steps involving abstract representations of what has been heard. Recent results from several lines of research are here brought together to shed light on the nature and role of these representations. In spoken-word recognition, representations of phonological form and of conceptual content are dissociable. This follows from the independence of patterns of priming for a word's form and its meaning. The nature of the phonological-form representations is determined not only by acoustic-phonetic input but also by other sources of information, including metalinguistic knowledge. This follows from evidence that listeners can store two forms as different without showing any evidence of being able to detect the difference in question when they listen to speech. The lexical representations are in turn separate from prelexical representations, which are also abstract in nature. This follows from evidence that perceptual learning about speaker-specific phoneme realization, induced on the basis of a few words, generalizes across the whole lexicon to inform the recognition of all words containing the same phoneme. The efficiency of human speech processing has its basis in the rapid execution of operations over abstract representations.
  • Cutler, A., Garcia Lecumberri, M. L., & Cooke, M. (2008). Consonant identification in noise by native and non-native listeners: Effects of local context. Journal of the Acoustical Society of America, 124(2), 1264-1268. doi:10.1121/1.2946707.

    Abstract

    Speech recognition in noise is harder in second (L2) than first languages (L1). This could be because noise disrupts speech processing more in L2 than L1, or because L1 listeners recover better though disruption is equivalent. Two similar prior studies produced discrepant results: Equivalent noise effects for L1 and L2 (Dutch) listeners, versus larger effects for L2 (Spanish) than L1. To explain this, the latter experiment was presented to listeners from the former population. Larger noise effects on consonant identification emerged for L2 (Dutch) than L1 listeners, suggesting that task factors rather than L2 population differences underlie the results discrepancy.
  • Goudbeek, M., Cutler, A., & Smits, R. (2008). Supervised and unsupervised learning of multidimensionally varying nonnative speech categories. Speech Communication, 50(2), 109-125. doi:10.1016/j.specom.2007.07.003.

    Abstract

    The acquisition of novel phonetic categories is hypothesized to be affected by the distributional properties of the input, the relation of the new categories to the native phonology, and the availability of supervision (feedback). These factors were examined in four experiments in which listeners were presented with novel categories based on vowels of Dutch. Distribution was varied such that the categorization depended on the single dimension duration, the single dimension frequency, or both dimensions at once. Listeners were clearly sensitive to the distributional information, but unidimensional contrasts proved easier to learn than multidimensional. The native phonology was varied by comparing Spanish versus American English listeners. Spanish listeners found categorization by frequency easier than categorization by duration, but this was not true of American listeners, whose native vowel system makes more use of duration-based distinctions. Finally, feedback was either available or not; this comparison showed supervised learning to be significantly superior to unsupervised learning.
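    The supervised/unsupervised contrast tested in these experiments can be illustrated with a minimal simulation. Everything below is invented for illustration: two "vowel" categories differing only in duration, Gaussian tokens, a supervised learner that receives the category label as feedback and tracks running means, and an unsupervised learner that must recover the same structure from the distribution alone, here via a two-cluster one-dimensional k-means.

```python
import random

random.seed(1)
# Two toy categories differing only in duration (ms); values invented.
short = [random.gauss(90, 8) for _ in range(50)]
long_ = [random.gauss(150, 8) for _ in range(50)]
data = [(x, 0) for x in short] + [(x, 1) for x in long_]
random.shuffle(data)

# Supervised learner: feedback reveals the label, so it simply tracks
# the running mean of each labeled category.
means = [0.0, 0.0]
counts = [0, 0]
for x, label in data:
    counts[label] += 1
    means[label] += (x - means[label]) / counts[label]

# Unsupervised learner: no feedback; two-cluster 1-D k-means seeded at
# the extremes of the observed distribution.
c = [min(x for x, _ in data), max(x for x, _ in data)]
for _ in range(20):
    groups = ([], [])
    for x, _ in data:
        groups[abs(x - c[0]) > abs(x - c[1])].append(x)
    c = [sum(g) / len(g) for g in groups if g]

print(means)      # labeled category means, roughly 90 and 150
print(sorted(c))  # roughly the same centers, found without labels
```

    With categories this well separated, both learners converge on the same centers; the paper's point is that with less separable, multidimensional contrasts, the availability of feedback made a significant difference.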
  • Kim, J., Davis, C., & Cutler, A. (2008). Perceptual tests of rhythmic similarity: II. Syllable rhythm. Language and Speech, 51(4), 343-359. doi:10.1177/0023830908099069.

    Abstract

    To segment continuous speech into its component words, listeners make use of language rhythm; because rhythm differs across languages, so do the segmentation procedures which listeners use. For each of stress-, syllable-, and mora-based rhythmic structure, perceptual experiments have led to the discovery of corresponding segmentation procedures. In the case of mora-based rhythm, similar segmentation has been demonstrated in the otherwise unrelated languages Japanese and Telugu; segmentation based on syllable rhythm, however, has been previously demonstrated only for European languages from the Romance family. We here report two target detection experiments in which Korean listeners, presented with speech in Korean and in French, displayed patterns of segmentation like those previously observed in analogous experiments with French listeners. The Korean listeners' accuracy in detecting word-initial target fragments in either language was significantly higher when the fragments corresponded exactly to a syllable in the input than when the fragments were smaller or larger than a syllable. We conclude that Korean and French listeners can call on similar procedures for segmenting speech, and we further propose that perceptual tests of speech segmentation provide a valuable accompaniment to acoustic analyses for establishing languages' rhythmic class membership.
  • Cooper, N., Cutler, A., & Wales, R. (2002). Constraints of lexical stress on lexical access in English: Evidence from native and non-native listeners. Language and Speech, 45(3), 207-228.

    Abstract

    Four cross-modal priming experiments and two forced-choice identification experiments investigated the use of suprasegmental cues to stress in the recognition of spoken English words, by native (English-speaking) and non-native (Dutch) listeners. Previous results had indicated that suprasegmental information was exploited in lexical access by Dutch but not by English listeners. For both listener groups, recognition of visually presented target words was faster, in comparison to a control condition, after stress-matching spoken primes, either monosyllabic (mus- from MUsic/muSEum) or bisyllabic (admi- from ADmiral/admiRAtion). For native listeners, the effect of stress-mismatching bisyllabic primes was not different from that of control primes, but mismatching monosyllabic primes produced partial facilitation. For non-native listeners, both bisyllabic and monosyllabic stress-mismatching primes produced partial facilitation. Native English listeners thus can exploit suprasegmental information in spoken-word recognition, but information from two syllables is used more effectively than information from one syllable. Dutch listeners are less proficient at using suprasegmental information in English than in their native language, but, as in their native language, use mono- and bisyllabic information to an equal extent. In forced-choice identification, Dutch listeners outperformed native listeners at correctly assigning a monosyllabic fragment (e.g., mus-) to one of two words differing in stress.
  • Cutler, A., & Otake, T. (2002). Rhythmic categories in spoken-word recognition. Journal of Memory and Language, 46(2), 296-322. doi:10.1006/jmla.2001.2814.

    Abstract

    Rhythmic categories such as morae in Japanese or stress units in English play a role in the perception of spoken
    language. We examined this role in Japanese, since recent evidence suggests that morae may intervene as
    structural units in word recognition. First, we found that traditional puns more often substituted part of a mora
    than a whole mora. Second, when listeners reconstructed distorted words, e.g. panorama from panozema, responses
    were faster and more accurate when only a phoneme was distorted (panozama, panorema) than when a
    whole CV mora was distorted (panozema). Third, lexical decisions on the same nonwords were better predicted
    by duration and number of phonemes from nonword uniqueness point to word end than by number of morae. Our
    results indicate no role for morae in early spoken-word processing; we propose that rhythmic categories constrain
    not initial lexical activation but subsequent processes of speech segmentation and selection among word candidates.
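    The nonword uniqueness point used in the third experiment (the point at which the input stops being consistent with any real word) is easy to make concrete. The three-word lexicon below is invented for illustration, with letters standing in for segments:

```python
# Hypothetical mini-lexicon; letters stand in for phonemic segments.
LEXICON = ["panorama", "panda", "panel"]

def uniqueness_point(nonword):
    """1-based position at which the input diverges from every word,
    or None if the whole string remains consistent with some word."""
    for i in range(1, len(nonword) + 1):
        if not any(w.startswith(nonword[:i]) for w in LEXICON):
            return i
    return None

print(uniqueness_point("panozema"))  # 5: "panoz" matches no word's onset
```

    The abstract's finding is that duration and segment count from this point to word end predicted lexical decision performance better than mora counts did.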
  • Cutler, A., Demuth, K., & McQueen, J. M. (2002). Universality versus language-specificity in listening to running speech. Psychological Science, 13(3), 258-262. doi:10.1111/1467-9280.00447.

    Abstract

    Recognizing spoken language involves automatic activation of multiple candidate words. The process of selection between candidates is made more efficient by inhibition of embedded words (like egg in beg) that leave a portion of the input stranded (here, b). Results from European languages suggest that this inhibition occurs when consonants are stranded but not when syllables are stranded. The reason why leftover syllables do not lead to inhibition could be that in principle they might themselves be words; in European languages, a syllable can be a word. In Sesotho (a Bantu language), however, a single syllable cannot be a word. We report that in Sesotho, word recognition is inhibited by stranded consonants, but stranded monosyllables produce no more difficulty than stranded bisyllables (which could be Sesotho words). This finding suggests that the viability constraint which inhibits spurious embedded word candidates is not sensitive to language-specific word structure, but is universal.
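    The viability constraint reported here lends itself to a compact sketch. The check below is an illustrative simplification, not the authors' implementation: letters stand in for segments (so "eg" stands in for the phonemic form of egg), and a stranded residue counts as viable only if it contains a vowel.

```python
VOWELS = set("aeiou")  # orthographic stand-in for vocalic segments

def stranded_residues(utterance, candidate):
    """Leftover material when `candidate` is embedded in `utterance`."""
    i = utterance.find(candidate)
    if i < 0:
        return None
    return [r for r in (utterance[:i], utterance[i + len(candidate):]) if r]

def viable(utterance, candidate):
    """An embedded parse is viable only if every residue contains a vowel."""
    residues = stranded_residues(utterance, candidate)
    return residues is not None and all(VOWELS & set(r) for r in residues)

print(viable("beg", "eg"))      # False: stranded "b" is not a possible word
print(viable("apart", "part"))  # True: stranded "a" could itself be a word
```

    The Sesotho result above is that this vowel-based check applies even in a language that itself permits no monosyllabic words, which is why the constraint appears to be universal rather than tied to language-specific word structure.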
  • Cutler, A. (2002). Native listeners. European Review, 10(1), 27-41. doi:10.1017/S1062798702000030.

    Abstract

    Becoming a native listener is the necessary precursor to becoming a native speaker. Babies in the first year of life undertake a remarkable amount of work; by the time they begin to speak, they have perceptually mastered the phonological repertoire and phoneme co-occurrence probabilities of the native language, and they can locate familiar word-forms in novel continuous-speech contexts. The skills acquired at this early stage form a necessary part of adult listening. However, the same native listening skills also underlie problems in listening to a late-acquired non-native language, accounting for why in such a case listening (an innate ability) is sometimes paradoxically more difficult than, for instance, reading (a learned ability).
  • Norris, D., McQueen, J. M., & Cutler, A. (2002). Bias effects in facilitatory phonological priming. Memory & Cognition, 30(3), 399-411.

    Abstract

    In four experiments, we examined the facilitation that occurs when spoken-word targets rhyme with preceding spoken primes. In Experiment 1, listeners’ lexical decisions were faster to words following rhyming words (e.g., ramp–LAMP) than to words following unrelated primes (e.g., pink–LAMP). No facilitation was observed for nonword targets. Targets that almost rhymed with their primes (foils; e.g., bulk–SULSH) were included in Experiment 2; facilitation for rhyming targets was severely attenuated. Experiments 3 and 4 were single-word shadowing variants of the earlier experiments. There was facilitation for both rhyming words and nonwords; the presence of foils had no significant influence on the priming effect. A major component of the facilitation in lexical decision appears to be strategic: Listeners are biased to say “yes” to targets that rhyme with their primes, unless foils discourage this strategy. The nonstrategic component of phonological facilitation may reflect speech perception processes that operate prior to lexical access.
  • Spinelli, E., Cutler, A., & McQueen, J. M. (2002). Resolution of liaison for lexical access in French. Revue Française de Linguistique Appliquée, 7, 83-96.

    Abstract

    Spoken word recognition involves automatic activation of lexical candidates compatible with the perceived input. In running speech, words abut one another without intervening gaps, and syllable boundaries can mismatch with word boundaries. For instance, liaison in ’petit agneau’ creates a syllable beginning with a consonant although ’agneau’ begins with a vowel. In two cross-modal priming experiments we investigate how French listeners recognise words in liaison environments. These results suggest that the resolution of liaison in part depends on acoustic cues which distinguish liaison from non-liaison consonants, and in part on the availability of lexical support for a liaison interpretation.
  • Cutler, A., Van Ooijen, B., Norris, D., & Sanchez-Casas, R. (1996). Speeded detection of vowels: A cross-linguistic study. Perception and Psychophysics, 58, 807-822. Retrieved from http://www.psychonomic.org/search/view.cgi?id=430.

    Abstract

    In four experiments, listeners’ response times to detect vowel targets in spoken input were measured. The first three experiments were conducted in English. In two, one using real words and the other, nonwords, detection accuracy was low, targets in initial syllables were detected more slowly than targets in final syllables, and both response time and missed-response rate were inversely correlated with vowel duration. In a third experiment, the speech context for some subjects included all English vowels, while for others, only five relatively distinct vowels occurred. This manipulation had essentially no effect, and the same response pattern was again observed. A fourth experiment, conducted in Spanish, replicated the results in the first three experiments, except that miss rate was here unrelated to vowel duration. We propose that listeners’ responses to vowel targets in naturally spoken input are effectively cautious, reflecting realistic appreciation of vowel variability in natural context.
  • Otake, T., Yoneyama, K., Cutler, A., & van der Lugt, A. (1996). The representation of Japanese moraic nasals. Journal of the Acoustical Society of America, 100, 3831-3842. doi:10.1121/1.417239.

    Abstract

    Nasal consonants in syllabic coda position in Japanese assimilate to the place of articulation of a following consonant. The resulting forms may be perceived as different realizations of a single underlying unit, and indeed the kana orthographies represent them with a single character. In the present study, Japanese listeners' response time to detect nasal consonants was measured. Nasals in coda position, i.e., moraic nasals, were detected faster and more accurately than nonmoraic nasals, as reported in previous studies. The place of articulation with which moraic nasals were realized affected neither response time nor accuracy. Non-native subjects who knew no Japanese, given the same materials with the same instructions, simply failed to respond to moraic nasals which were realized bilabially. When the nasals were cross-spliced across place of articulation contexts the Japanese listeners still showed no significant place of articulation effects, although responses were faster and more accurate to unspliced than to cross-spliced nasals. When asked to detect the phoneme following the (cross-spliced) moraic nasal, Japanese listeners showed effects of mismatch between nasal and context, but non-native listeners did not. Together, these results suggest that Japanese listeners are capable of very rapid abstraction from phonetic realization to a unitary representation of moraic nasals; but they can also use the phonetic realization of a moraic nasal effectively to obtain anticipatory information about following phonemes.
  • Cutler, A., & Foss, D. (1977). On the role of sentence stress in sentence processing. Language and Speech, 20, 1-10.
  • Fay, D., & Cutler, A. (1977). Malapropisms and the structure of the mental lexicon. Linguistic Inquiry, 8, 505-520. Retrieved from http://www.jstor.org/stable/4177997.
