Anne Cutler †

Publications

  • Cutler, A. (2010). Abstraction-based efficiency in the lexicon. Laboratory Phonology, 1(2), 301-318. doi:10.1515/LABPHON.2010.016.

    Abstract

    Listeners learn from their past experience of listening to spoken words, and use this learning to maximise the efficiency of future word recognition. This paper summarises evidence that the facilitatory effects of drawing on past experience are mediated by abstraction, enabling learning to be generalised across new words and new listening situations. Phoneme category retuning, which allows adaptation to speaker-specific articulatory characteristics, is generalised on the basis of relatively brief experience to words previously unheard from that speaker. Abstract knowledge of prosodic regularities is applied to recognition even of novel words for which these regularities were violated. Prosodic word-boundary regularities drive segmentation of speech into words independently of the membership of the lexical candidate set resulting from the segmentation operation. Each of these different cases illustrates how abstraction from past listening experience has contributed to the efficiency of lexical recognition.
  • Cutler, A., Treiman, R., & Van Ooijen, B. (2010). Strategic deployment of orthographic knowledge in phoneme detection. Language and Speech, 53(3), 307-320. doi:10.1177/0023830910371445.

    Abstract

    The phoneme detection task is widely used in spoken-word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realized. Listeners detected the target sounds [b, m, t, f, s, k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b, m, t], which have consistent word-initial spelling, than to the targets [f, s, k], which are inconsistently spelled, but only when spelling was rendered salient by the presence in the experiment of many irregularly spelled filler words. Within the inconsistent targets [f, s, k], there was no significant difference between responses to targets in words with more usual (foam, seed, cattle) versus less usual (phone, cede, kettle) spellings. Phoneme detection is thus not necessarily sensitive to orthographic effects; knowledge of spelling stored in the lexical representations of words does not automatically become available as word candidates are activated. However, salient orthographic manipulations in experimental input can induce such sensitivity. We attribute this to listeners' experience of the value of spelling in everyday situations that encourage phonemic decisions (such as learning new names).
  • Cutler, A., Cooke, M., & Lecumberri, M. L. G. (2010). Preface. Speech Communication, 52, 863. doi:10.1016/j.specom.2010.11.003.

    Abstract

    Adverse listening conditions always make the perception of speech harder, but their deleterious effect is far greater if the speech we are trying to understand is in a non-native language. An imperfect signal can be coped with by recourse to the extensive knowledge one has of a native language, and imperfect knowledge of a non-native language can still support useful communication when speech signals are high-quality. But the combination of imperfect signal and imperfect knowledge leads rapidly to communication breakdown. This phenomenon is undoubtedly well known to every reader of Speech Communication from personal experience. Many readers will also have a professional interest in explaining, or remedying, the problems it produces. The journal’s readership being a decidedly interdisciplinary one, this interest will involve quite varied scientific approaches, including (but not limited to) modelling the interaction of first and second language vocabularies and phonemic repertoires, developing targeted listening training for language learners, and redesigning the acoustics of classrooms and conference halls. In other words, the phenomenon that this special issue deals with is a well-known one that raises important scientific and practical questions across a range of speech communication disciplines, and Speech Communication is arguably the ideal vehicle for presentation of such a breadth of approaches in a single volume. The call for papers for this issue elicited a large number of submissions from across the full range of the journal’s interdisciplinary scope, requiring the guest editors to apply very strict criteria to the final selection. Perhaps unique in the history of treatments of this topic is the combination represented by the guest editors for this issue: a phonetician whose primary research interest is in second-language speech (MLGL), an engineer whose primary research field is the acoustics of masking in speech processing (MC), and a psychologist whose primary research topic is the recognition of spoken words (AC). In the opening article of the issue, these three authors together review the existing literature on listening to second-language speech under adverse conditions, bringing together these differing perspectives for the first time in a single contribution. The introductory review is followed by 13 new experimental reports of phonetic, acoustic and psychological studies of the topic. The guest editors thank Speech Communication editor Marc Swerts and the journal’s team at Elsevier, as well as all the reviewers who devoted time and expert efforts to perfecting the contributions to this issue.
  • Lecumberri, M. L. G., Cooke, M., & Cutler, A. (2010). Non-native speech perception in adverse conditions: A review. Speech Communication, 52, 864-886. doi:10.1016/j.specom.2010.08.014.

    Abstract

    If listening in adverse conditions is hard, then listening in a foreign language is doubly so: non-native listeners have to cope with both imperfect signals and imperfect knowledge. Comparison of native and non-native listener performance in speech-in-noise tasks helps to clarify the role of prior linguistic experience in speech perception, and, more directly, contributes to an understanding of the problems faced by language learners in everyday listening situations. This article reviews experimental studies on non-native listening in adverse conditions, organised around three principal contributory factors: the task facing listeners, the effect of adverse conditions on speech, and the differences among listener populations. Based on a comprehensive tabulation of key studies, we identify robust findings, research trends and gaps in current knowledge.
  • Broersma, M., & Cutler, A. (2008). Phantom word activation in L2. System, 36(1), 22-34. doi:10.1016/j.system.2007.11.003.

    Abstract

    L2 listening can involve the phantom activation of words which are not actually in the input. All spoken-word recognition involves multiple concurrent activation of word candidates, with selection of the correct words achieved by a process of competition between them. L2 listening involves more such activation than L1 listening, and we report two studies illustrating this. First, in a lexical decision study, L2 listeners accepted (but L1 listeners did not accept) spoken non-words such as groof or flide as real English words. Second, a priming study demonstrated that the same spoken non-words made recognition of the real words groove, flight easier for L2 (but not L1) listeners, suggesting that, for the L2 listeners only, these real words had been activated by the spoken non-word input. We propose that further understanding of the activation and competition process in L2 lexical processing could lead to new understanding of L2 listening difficulty.
  • Cutler, A. (2008). The abstract representations in speech processing. Quarterly Journal of Experimental Psychology, 61(11), 1601-1619. doi:10.1080/13803390802218542.

    Abstract

    Speech processing by human listeners derives meaning from acoustic input via intermediate steps involving abstract representations of what has been heard. Recent results from several lines of research are here brought together to shed light on the nature and role of these representations. In spoken-word recognition, representations of phonological form and of conceptual content are dissociable. This follows from the independence of patterns of priming for a word's form and its meaning. The nature of the phonological-form representations is determined not only by acoustic-phonetic input but also by other sources of information, including metalinguistic knowledge. This follows from evidence that listeners can store two forms as different without showing any evidence of being able to detect the difference in question when they listen to speech. The lexical representations are in turn separate from prelexical representations, which are also abstract in nature. This follows from evidence that perceptual learning about speaker-specific phoneme realization, induced on the basis of a few words, generalizes across the whole lexicon to inform the recognition of all words containing the same phoneme. The efficiency of human speech processing has its basis in the rapid execution of operations over abstract representations.
  • Cutler, A., Garcia Lecumberri, M. L., & Cooke, M. (2008). Consonant identification in noise by native and non-native listeners: Effects of local context. Journal of the Acoustical Society of America, 124(2), 1264-1268. doi:10.1121/1.2946707.

    Abstract

    Speech recognition in noise is harder in second (L2) than first languages (L1). This could be because noise disrupts speech processing more in L2 than L1, or because L1 listeners recover better even though the disruption is equivalent. Two similar prior studies produced discrepant results: equivalent noise effects for L1 and L2 (Dutch) listeners, versus larger effects for L2 (Spanish) than L1. To explain this, the latter experiment was presented to listeners from the former population. Larger noise effects on consonant identification emerged for L2 (Dutch) than L1 listeners, suggesting that task factors rather than L2 population differences underlie the discrepancy in results.
  • Goudbeek, M., Cutler, A., & Smits, R. (2008). Supervised and unsupervised learning of multidimensionally varying nonnative speech categories. Speech Communication, 50(2), 109-125. doi:10.1016/j.specom.2007.07.003.

    Abstract

    The acquisition of novel phonetic categories is hypothesized to be affected by the distributional properties of the input, the relation of the new categories to the native phonology, and the availability of supervision (feedback). These factors were examined in four experiments in which listeners were presented with novel categories based on vowels of Dutch. Distribution was varied such that the categorization depended on the single dimension duration, the single dimension frequency, or both dimensions at once. Listeners were clearly sensitive to the distributional information, but unidimensional contrasts proved easier to learn than multidimensional. The native phonology was varied by comparing Spanish versus American English listeners. Spanish listeners found categorization by frequency easier than categorization by duration, but this was not true of American listeners, whose native vowel system makes more use of duration-based distinctions. Finally, feedback was either available or not; this comparison showed supervised learning to be significantly superior to unsupervised learning.
  • Kim, J., Davis, C., & Cutler, A. (2008). Perceptual tests of rhythmic similarity: II. Syllable rhythm. Language and Speech, 51(4), 343-359. doi:10.1177/0023830908099069.

    Abstract

    To segment continuous speech into its component words, listeners make use of language rhythm; because rhythm differs across languages, so do the segmentation procedures which listeners use. For each of stress-, syllable- and mora-based rhythmic structure, perceptual experiments have led to the discovery of corresponding segmentation procedures. In the case of mora-based rhythm, similar segmentation has been demonstrated in the otherwise unrelated languages Japanese and Telugu; segmentation based on syllable rhythm, however, has been previously demonstrated only for European languages from the Romance family. We here report two target detection experiments in which Korean listeners, presented with speech in Korean and in French, displayed patterns of segmentation like those previously observed in analogous experiments with French listeners. The Korean listeners' accuracy in detecting word-initial target fragments in either language was significantly higher when the fragments corresponded exactly to a syllable in the input than when the fragments were smaller or larger than a syllable. We conclude that Korean and French listeners can call on similar procedures for segmenting speech, and we further propose that perceptual tests of speech segmentation provide a valuable accompaniment to acoustic analyses for establishing languages' rhythmic class membership.
  • Cutler, A. (Ed.). (2005). Twenty-first century psycholinguistics: Four cornerstones. Mahwah, NJ: Erlbaum.
  • Cutler, A., Smits, R., & Cooper, N. (2005). Vowel perception: Effects of non-native language vs. non-native dialect. Speech Communication, 47(1-2), 32-42. doi:10.1016/j.specom.2005.02.001.

    Abstract

    Three groups of listeners identified the vowel in CV and VC syllables produced by an American English talker. The listeners were (a) native speakers of American English, (b) native speakers of Australian English (different dialect), and (c) native speakers of Dutch (different language). The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0 dB, 8 dB, and 16 dB). The identification performance of native listeners was significantly better than that of listeners with another language but did not significantly differ from the performance of listeners with another dialect. Dialect differences did however affect the type of perceptual confusions which listeners made; in particular, the Australian listeners’ judgements of vowel tenseness were more variable than the American listeners’ judgements, which may be ascribed to cross-dialectal differences in this vocalic feature. Although listening difficulty can result when speech input mismatches the native dialect in terms of the precise cues for and boundaries of phonetic categories, the difficulty is very much less than that which arises when speech input mismatches the native language in terms of the repertoire of phonemic categories available.
  • Cutler, A. (2005). Why is it so hard to understand a second language in noise? Newsletter, American Association of Teachers of Slavic and East European Languages, 48, 16.
  • Kooijman, V., Hagoort, P., & Cutler, A. (2005). Electrophysiological evidence for prelinguistic infants' word recognition in continuous speech. Cognitive Brain Research, 24(1), 109-116. doi:10.1016/j.cogbrainres.2004.12.009.

    Abstract

    Children begin to talk at about age one. The vocabulary they need to do so must be built on perceptual evidence and, indeed, infants begin to recognize spoken words long before they talk. Most of the utterances infants hear, however, are continuous, without pauses between words, so constructing a vocabulary requires them to decompose continuous speech in order to extract the individual words. Here, we present electrophysiological evidence that 10-month-old infants recognize two-syllable words they have previously heard only in isolation when these words are presented anew in continuous speech. Moreover, they only need roughly the first syllable of the word to begin doing this. Thus, prelinguistic infants command a highly efficient procedure for segmentation and recognition of spoken words in the absence of an existing vocabulary, allowing them to tackle effectively the problem of bootstrapping a lexicon out of the highly variable, continuous speech signals in their environment.
  • Sharp, D. J., Scott, S. K., Cutler, A., & Wise, R. J. S. (2005). Lexical retrieval constrained by sound structure: The role of the left inferior frontal gyrus. Brain and Language, 92(3), 309-319. doi:10.1016/j.bandl.2004.07.002.

    Abstract

    Positron emission tomography was used to investigate two competing hypotheses about the role of the left inferior frontal gyrus (IFG) in word generation. One proposes a domain-specific organization, with neural activation dependent on the type of information being processed, i.e., surface sound structure or semantic. The other proposes a process-specific organization, with activation dependent on processing demands, such as the amount of selection needed to decide between competing lexical alternatives. In a novel word retrieval task, word reconstruction (WR), subjects generated real words from heard non-words by the substitution of either a vowel or consonant. Both types of lexical retrieval, informed by sound structure alone, produced activation within anterior and posterior left IFG regions. Within these regions there was greater activity for consonant WR, which is more difficult and imposes greater processing demands. These results support a process-specific organization of the anterior left IFG.
  • Van Donselaar, W., Koster, M., & Cutler, A. (2005). Exploring the role of lexical stress in lexical recognition. Quarterly Journal of Experimental Psychology, 58A(2), 251-273. doi:10.1080/02724980343000927.

    Abstract

    Three cross-modal priming experiments examined the role of suprasegmental information in the processing of spoken words. All primes consisted of truncated spoken Dutch words. Recognition of visually presented word targets was facilitated by prior auditory presentation of the first two syllables of the same words as primes, but only if they were appropriately stressed (e.g., OKTOBER preceded by okTO-); inappropriate stress, compatible with another word (e.g., OKTOBER preceded by OCto-, the beginning of octopus), produced inhibition. Monosyllabic fragments (e.g., OC-) also produced facilitation when appropriately stressed; if inappropriately stressed, they produced neither facilitation nor inhibition. The bisyllabic fragments that were compatible with only one word produced facilitation to semantically associated words, but inappropriate stress caused no inhibition of associates. The results are explained within a model of spoken-word recognition involving competition between simultaneously activated phonological representations followed by activation of separate conceptual representations for strongly supported lexical candidates; at the level of the phonological representations, activation is modulated by both segmental and suprasegmental information.
  • Warner, N., Smits, R., McQueen, J. M., & Cutler, A. (2005). Phonological and statistical effects on timing of speech perception: Insights from a database of Dutch diphone perception. Speech Communication, 46(1), 53-72. doi:10.1016/j.specom.2005.01.003.

    Abstract

    We report detailed analyses of a very large database on timing of speech perception collected by Smits et al. (Smits, R., Warner, N., McQueen, J.M., Cutler, A., 2003. Unfolding of phonetic information over time: A database of Dutch diphone perception. J. Acoust. Soc. Am. 113, 563–574). Eighteen listeners heard all possible diphones of Dutch, gated in portions of varying size and presented without background noise. The present report analyzes listeners’ responses across gates in terms of phonological features (voicing, place, and manner for consonants; height, backness, and length for vowels). The resulting patterns for feature perception differ from patterns reported when speech is presented in noise. The data are also analyzed for effects of stress and of phonological context (neighboring vowel vs. consonant); effects of these factors are observed to be surprisingly limited. Finally, statistical effects, such as overall phoneme frequency and transitional probabilities, along with response biases, are examined; these too exercise only limited effects on response patterns. The results suggest highly accurate speech perception on the basis of acoustic information alone.
  • Warner, N., Kim, J., Davis, C., & Cutler, A. (2005). Use of complex phonological patterns in speech processing: Evidence from Korean. Journal of Linguistics, 41(2), 353-387. doi:10.1017/S0022226705003294.

    Abstract

    Korean has a very complex phonology, with many interacting alternations. In a coronal-/i/ sequence, depending on the type of phonological boundary present, alternations such as palatalization, nasal insertion, nasal assimilation, coda neutralization, and intervocalic voicing can apply. This paper investigates how the phonological patterns of Korean affect processing of morphemes and words. Past research on languages such as English, German, Dutch, and Finnish has shown that listeners exploit syllable structure constraints in processing speech and segmenting it into words. The current study shows that in parsing speech, listeners also use much more complex patterns that relate the surface phonological string to various boundaries.
  • Cutler, A., Weber, A., Smits, R., & Cooper, N. (2004). Patterns of English phoneme confusions by native and non-native listeners. Journal of the Acoustical Society of America, 116(6), 3668-3678. doi:10.1121/1.1810292.

    Abstract

    Native American English and non-native (Dutch) listeners identified either the consonant or the vowel in all possible American English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0, 8, and 16 dB). The phoneme identification performance of the non-native listeners was less accurate than that of the native listeners. All listeners were adversely affected by noise. With these isolated syllables, initial segments were harder to identify than final segments. Crucially, the effects of language background and noise did not interact; the performance asymmetry between the native and non-native groups was not significantly different across signal-to-noise ratios. It is concluded that the frequently reported disproportionate difficulty of non-native listening under disadvantageous conditions is not due to a disproportionate increase in phoneme misidentifications.
  • Cutler, A. (2004). On spoken-word recognition in a second language. Newsletter, American Association of Teachers of Slavic and East European Languages, 47, 15.
  • Weber, A., & Cutler, A. (2004). Lexical competition in non-native spoken-word recognition. Journal of Memory and Language, 50(1), 1-25. doi:10.1016/S0749-596X(03)00105-0.

    Abstract

    Four eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name (pencil, given target panda) than on less confusable distractors (beetle, given target bottle). English listeners showed no such viewing time difference. The confusability was asymmetric: given pencil as target, panda did not distract more than distinct competitors. Distractors with Dutch names phonologically related to English target names (deksel, ‘lid,’ given target desk) also received longer fixations than distractors with phonologically unrelated names. Again, English listeners showed no differential effect. With the materials translated into Dutch, Dutch listeners showed no activation of the English words (desk, given target deksel). The results motivate two conclusions: native phonemic categories capture second-language input even when stored representations maintain a second-language distinction; and lexical competition is greater for non-native than for native listeners.
  • Akker, E., & Cutler, A. (2003). Prosodic cues to semantic structure in native and nonnative listening. Bilingualism: Language and Cognition, 6(2), 81-96. doi:10.1017/S1366728903001056.

    Abstract

    Listeners efficiently exploit sentence prosody to direct attention to words bearing sentence accent. This effect has been explained as a search for focus, furthering rapid apprehension of semantic structure. A first experiment supported this explanation: English listeners detected phoneme targets in sentences more rapidly when the target-bearing words were in accented position or in focussed position, but the two effects interacted, consistent with the claim that the effects serve a common cause. In a second experiment a similar asymmetry was observed with Dutch listeners and Dutch sentences. In a third and a fourth experiment, proficient Dutch users of English heard English sentences; here, however, the two effects did not interact. The results suggest that less efficient mapping of prosody to semantics may be one way in which nonnative listening fails to equal native listening.
  • Johnson, E. K., Jusczyk, P. W., Cutler, A., & Norris, D. (2003). Lexical viability constraints on speech segmentation by infants. Cognitive Psychology, 46(1), 65-97. doi:10.1016/S0010-0285(02)00507-8.

    Abstract

    The Possible Word Constraint limits the number of lexical candidates considered in speech recognition by stipulating that input should be parsed into a string of lexically viable chunks. For instance, an isolated single consonant is not a feasible word candidate. Any segmentation containing such a chunk is disfavored. Five experiments using the head-turn preference procedure investigated whether, like adults, 12-month-olds observe this constraint in word recognition. In Experiments 1 and 2, infants were familiarized with target words (e.g., rush), then tested on lists of nonsense items containing these words in “possible” (e.g., “niprush” [nip + rush]) or “impossible” positions (e.g., “prush” [p + rush]). The infants listened significantly longer to targets in “possible” versus “impossible” contexts when targets occurred at the end of nonsense items (rush in “prush”), but not when they occurred at the beginning (tan in “tance”). In Experiments 3 and 4, 12-month-olds were similarly familiarized with target words, but test items were real words in sentential contexts (win in “wind” versus “window”). The infants listened significantly longer to words in the “possible” condition regardless of target location. Experiment 5 with targets at the beginning of isolated real words (e.g., win in “wind”) replicated Experiment 2 in showing no evidence of viability effects in beginning position. Taken together, the findings suggest that, in situations in which 12-month-olds are required to rely on their word segmentation abilities, they give evidence of observing lexical viability constraints in the way that they parse fluent speech.
  • McQueen, J. M., Cutler, A., & Norris, D. (2003). Flow of information in the spoken word recognition system. Speech Communication, 41(1), 257-270. doi:10.1016/S0167-6393(02)00108-5.

    Abstract

    Spoken word recognition consists of two major component processes. First, at the prelexical stage, an abstract description of the utterance is generated from the information in the speech signal. Second, at the lexical stage, this description is used to activate all the words stored in the mental lexicon which match the input. These multiple candidate words then compete with each other. We review evidence which suggests that positive (match) and negative (mismatch) information of both a segmental and a suprasegmental nature is used to constrain this activation and competition process. We then ask whether, in addition to the necessary influence of the prelexical stage on the lexical stage, there is also feedback from the lexicon to the prelexical level. In two phonetic categorization experiments, Dutch listeners were asked to label both syllable-initial and syllable-final ambiguous fricatives (e.g., sounds ranging from [f] to [s]) in the word–nonword series maf–mas, and the nonword–word series jaf–jas. They tended to label the sounds in a lexically consistent manner (i.e., consistent with the word endpoints of the series). These lexical effects became smaller in listeners’ slower responses, even when the listeners were put under pressure to respond as fast as possible. Our results challenge models of spoken word recognition in which feedback modulates the prelexical analysis of the component sounds of a word whenever that word is heard.
  • Norris, D., McQueen, J. M., & Cutler, A. (2003). Perceptual learning in speech. Cognitive Psychology, 47(2), 204-238. doi:10.1016/S0010-0285(03)00006-9.

    Abstract

    This study demonstrates that listeners use lexical knowledge in perceptual learning of speech sounds. Dutch listeners first made lexical decisions on Dutch words and nonwords. The final fricative of 20 critical words had been replaced by an ambiguous sound, between [f] and [s]. One group of listeners heard ambiguous [f]-final words (e.g., [ʋɪtlɔ?], from witlof, chicory) and unambiguous [s]-final words (e.g., naaldbos, pine forest). Another group heard the reverse (e.g., ambiguous [naːldbɔ?], unambiguous witlof). Listeners who had heard [?] in [f]-final words were subsequently more likely to categorize ambiguous sounds on an [f]–[s] continuum as [f] than those who heard [?] in [s]-final words. Control conditions ruled out alternative explanations based on selective adaptation and contrast. Lexical information can thus be used to train categorization of speech. This use of lexical information differs from the on-line lexical feedback embodied in interactive models of speech perception. In contrast to on-line feedback, lexical feedback for learning is of benefit to spoken word recognition (e.g., in adapting to a newly encountered dialect).
  • Smits, R., Warner, N., McQueen, J. M., & Cutler, A. (2003). Unfolding of phonetic information over time: A database of Dutch diphone perception. Journal of the Acoustical Society of America, 113(1), 563-574. doi:10.1121/1.1525287.

    Abstract

    We present the results of a large-scale study on speech perception, assessing the number and type of perceptual hypotheses which listeners entertain about possible phoneme sequences in their language. Dutch listeners were asked to identify gated fragments of all 1179 diphones of Dutch, providing a total of 488,520 phoneme categorizations. The results manifest orderly uptake of acoustic information in the signal. Differences across phonemes in the rate at which fully correct recognition was achieved arose as a result of whether or not potential confusions could occur with other phonemes of the language (long with short vowels, affricates with their initial components, etc.). These data can be used to improve models of how acoustic phonetic information is mapped onto the mental lexicon during speech comprehension.
  • Spinelli, E., McQueen, J. M., & Cutler, A. (2003). Processing resyllabified words in French. Journal of Memory and Language, 48(2), 233-254. doi:10.1016/S0749-596X(02)00513-2.
  • Weber, A., & Cutler, A. (2003). Perceptual similarity co-existing with lexical dissimilarity [Abstract]. Abstracts of the 146th Meeting of the Acoustical Society of America. Journal of the Acoustical Society of America, 114(4 Pt. 2), 2422. doi:10.1121/1.1601094.

    Abstract

    The extreme case of perceptual similarity is indiscriminability, as when two second-language phonemes map to a single native category. An example is the English had-head vowel contrast for Dutch listeners; Dutch has just one such central vowel, transcribed [ɛ]. We examine whether the failure to discriminate in phonetic categorization implies indiscriminability in other (e.g., lexical) processing. Eyetracking experiments show that Dutch-native listeners instructed in English to “click on the panda” look (significantly more than native listeners) at a pictured pencil, suggesting that pan- activates their lexical representation of pencil. The reverse, however, is not the case: “click on the pencil” does not induce looks to a panda, suggesting that pen- does not activate panda in the lexicon. Thus prelexically undiscriminated second-language distinctions can nevertheless be maintained in stored lexical representations. The problem of mapping a resulting unitary input to two distinct categories in lexical representations is solved by allowing input to activate only one second-language category. For Dutch listeners to English, this is English [ɛ], as a result of which no vowels in the signal ever map to words containing [æ]. We suggest that the choice of category is here motivated by a more abstract, phonemic, metric of similarity.
  • Cutler, A. (1980). La leçon des lapsus. La Recherche, 11(112), 686-692.
  • Swinney, D. A., Zurif, E. B., & Cutler, A. (1980). Effects of sentential stress and word class upon comprehension in Broca’s aphasics. Brain and Language, 10, 132-144. doi:10.1016/0093-934X(80)90044-9.

    Abstract

    The roles which word class (open/closed) and sentential stress play in the sentence comprehension processes of both agrammatic (Broca's) aphasics and normal listeners were examined with a word monitoring task. Overall, normal listeners responded more quickly to stressed than to unstressed items, but showed no effect of word class. Aphasics also responded more quickly to stressed than to unstressed materials, but, unlike the normals, responded faster to open than to closed class words regardless of their stress. The results are interpreted as support for the theory that Broca's aphasics lack the functional underlying open/closed class word distinction used in word recognition by normal listeners.
  • Cutler, A. (1979). Contemporary reaction to Rudolf Meringer’s speech error research. Historiographia Linguistica, 6, 57-76.
  • Swinney, D. A., & Cutler, A. (1979). The access and processing of idiomatic expressions. Journal of Verbal Learning and Verbal Behavior, 18, 523-534. doi:10.1016/S0022-5371(79)90284-6.

    Abstract

    Two experiments examined the nature of access, storage, and comprehension of idiomatic phrases. In both studies a Phrase Classification Task was utilized. In this, reaction times to determine whether or not word strings constituted acceptable English phrases were measured. Classification times were significantly faster to idiom than to matched control phrases. This effect held under conditions involving different categories of idioms, different transitional probabilities among words in the phrases, and different levels of awareness of the presence of idioms in the materials. The data support a Lexical Representation Hypothesis for the processing of idioms.
