Anne Cutler †

Publications

  • Cutler, A., Sebastian-Galles, N., Soler-Vilageliu, O., & Van Ooijen, B. (2000). Constraints of vowels and consonants on lexical selection: Cross-linguistic comparisons. Memory & Cognition, 28, 746-755.

    Abstract

    Languages differ in the constitution of their phonemic repertoire and in the relative distinctiveness of phonemes within the repertoire. In the present study, we asked whether such differences constrain spoken-word recognition, via two word reconstruction experiments, in which listeners turned non-words into real words by changing single sounds. The experiments were carried out in Dutch (which has a relatively balanced vowel-consonant ratio and many similar vowels) and in Spanish (which has many more consonants than vowels and high distinctiveness among the vowels). Both Dutch and Spanish listeners responded significantly faster and more accurately when required to change vowels as opposed to consonants; when allowed to change any phoneme, they more often altered vowels than consonants. Vowel information thus appears to constrain lexical selection less tightly (allow more potential candidates) than does consonant information, independent of language-specific phoneme repertoire and of relative distinctiveness of vowels.
  • Cutler, A., & Van de Weijer, J. (2000). De ontdekking van de eerste woorden. Stem-, Spraak- en Taalpathologie, 9, 245-259.

    Abstract

    Speech is continuous: there are no reliable signals that tell the listener where one word ends and the next begins. For adult listeners, segmenting spoken language into separate words is thus not unproblematic, but for a child who does not yet possess a vocabulary, the continuity of speech poses an even greater challenge. Nevertheless, most children produce their first recognizable words around the beginning of the second year of life. These early speech productions are preceded by a formidable perceptual achievement. During the first year of life - especially during its second half - speech perception develops from a general phonetic discrimination capacity into a selective sensitivity to the phonological contrasts that occur in the native language. Recent research has further shown that, long before they can say even a single word, children are able to distinguish words that are characteristic of their native language from words that are not. Moreover, they can recognize words first presented in isolation when these later occur in a continuous speech context. The everyday language input to a child of this age does not exactly make the task easy, for instance because most words do not occur in isolation. Yet the child is also given some support, among other things because the range of words used is restricted.
  • Houston, D. M., Jusczyk, P. W., Kuijpers, C., Coolen, R., & Cutler, A. (2000). Cross-language word segmentation by 9-month-olds. Psychonomic Bulletin & Review, 7, 504-509.

    Abstract

    Dutch-learning and English-learning 9-month-olds were tested, using the Headturn Preference Procedure, for their ability to segment Dutch words with strong/weak stress patterns from fluent Dutch speech. This prosodic pattern is highly typical for words of both languages. The infants were familiarized with pairs of words and then tested on four passages, two that included the familiarized words and two that did not. Both the Dutch- and the English-learning infants gave evidence of segmenting the targets from the passages, to an equivalent degree. Thus, English-learning infants are able to extract words from fluent speech in a language that is phonetically different from English. We discuss the possibility that this cross-language segmentation ability is aided by the similarity of the typical rhythmic structure of Dutch and English words.
  • Norris, D., McQueen, J. M., & Cutler, A. (2000). Feedback on feedback on feedback: It’s feedforward. (Response to commentators). Behavioral and Brain Sciences, 23, 352-370.

    Abstract

    The central thesis of the target article was that feedback is never necessary in spoken word recognition. The commentaries present no new data and no new theoretical arguments which lead us to revise this position. In this response we begin by clarifying some terminological issues which have led to a number of significant misunderstandings. We provide some new arguments to support our case that the feedforward model Merge is indeed more parsimonious than the interactive alternatives, and that it provides a more convincing account of the data than alternative models. Finally, we extend the arguments to deal with new issues raised by the commentators, such as infant speech perception and neural architecture.
  • Norris, D., McQueen, J. M., & Cutler, A. (2000). Merging information in speech recognition: Feedback is never necessary. Behavioral and Brain Sciences, 23, 299-325.

    Abstract

    Top-down feedback does not benefit speech recognition; on the contrary, it can hinder it. No experimental data imply that feedback loops are required for speech recognition. Feedback is accordingly unnecessary and spoken word recognition is modular. To defend this thesis, we analyse lexical involvement in phonemic decision making. TRACE (McClelland & Elman 1986), a model with feedback from the lexicon to prelexical processes, is unable to account for all the available data on phonemic decision making. The modular Race model (Cutler & Norris 1979) is likewise challenged by some recent results, however. We therefore present a new modular model of phonemic decision making, the Merge model. In Merge, information flows from prelexical processes to the lexicon without feedback. Because phonemic decisions are based on the merging of prelexical and lexical information, Merge correctly predicts lexical involvement in phonemic decisions in both words and nonwords. Computer simulations show how Merge is able to account for the data through a process of competition between lexical hypotheses. We discuss the issue of feedback in other areas of language processing and conclude that modular models are particularly well suited to the problems and constraints of speech recognition.
  • Boland, J. E., & Cutler, A. (1995). Interaction with autonomy: Defining multiple output models in psycholinguistic theory. Working Papers in Linguistics, 45, 1-10. Retrieved from http://hdl.handle.net/2066/15768.

    Abstract

    There are currently a number of psycholinguistic models in which processing at a particular level of representation is characterized by the generation of multiple outputs, with resolution involving the use of information from higher levels of processing. Surprisingly, models with this architecture have been characterized as autonomous within the domain of word recognition and as interactive within the domain of sentence processing. We suggest that the apparent internal confusion is not, as might be assumed, due to fundamental differences between lexical and syntactic processing. Rather, we believe that the labels in each domain were chosen in order to obtain maximal contrast between a new model and the model or models that were currently dominating the field.
  • Boland, J. E., & Cutler, A. (1995). Interaction with autonomy: Multiple Output models and the inadequacy of the Great Divide. Cognition, 58, 309-320. doi:10.1016/0010-0277(95)00684-2.

    Abstract

    There are currently a number of psycholinguistic models in which processing at a particular level of representation is characterized by the generation of multiple outputs, with resolution - but not generation - involving the use of information from higher levels of processing. Surprisingly, models with this architecture have been characterized as autonomous within the domain of word recognition but as interactive within the domain of sentence processing. We suggest that the apparent confusion is not, as might be assumed, due to fundamental differences between lexical and syntactic processing. Rather, we believe that the labels in each domain were chosen in order to obtain maximal contrast between a new model and the model or models that were currently dominating the field. The contradiction serves to highlight the inadequacy of a simple autonomy/interaction dichotomy for characterizing the architectures of current processing models.
  • Fear, B. D., Cutler, A., & Butterfield, S. (1995). The strong/weak syllable distinction in English. Journal of the Acoustical Society of America, 97, 1893-1904. doi:10.1121/1.412063.

    Abstract

    Strong and weak syllables in English can be distinguished on the basis of vowel quality, of stress, or of both factors. Critical for deciding between these factors are syllables containing unstressed unreduced vowels, such as the first syllable of automata. In this study 12 speakers produced sentences containing matched sets of words with initial vowels ranging from stressed to reduced, at normal and at fast speech rates. Measurements of the duration, intensity, F0, and spectral characteristics of the word-initial vowels showed that unstressed unreduced vowels differed significantly from both stressed and reduced vowels. This result held true across speaker sex and dialect. The vowels produced by one speaker were then cross-spliced across the words within each set, and the resulting words' acceptability was rated by listeners. In general, cross-spliced words were rated significantly less acceptable than unspliced words only when reduced vowels interchanged with any other vowel. Correlations between rated acceptability and acoustic characteristics of the cross-spliced words demonstrated that listeners were attending to duration, intensity, and spectral characteristics. Together these results suggest that unstressed unreduced vowels in English pattern differently from both stressed and reduced vowels, so that no acoustic support for a binary categorical distinction exists; nevertheless, listeners make such a distinction, grouping unstressed unreduced vowels by preference with stressed vowels.
  • McQueen, J. M., Cutler, A., Briscoe, T., & Norris, D. (1995). Models of continuous speech recognition and the contents of the vocabulary. Language and Cognitive Processes, 10, 309-331. doi:10.1080/01690969508407098.

    Abstract

    Several models of spoken word recognition postulate that recognition is achieved via a process of competition between lexical hypotheses. Competition not only provides a mechanism for isolated word recognition, it also assists in continuous speech recognition, since it offers a means of segmenting continuous input into individual words. We present statistics on the pattern of occurrence of words embedded in the polysyllabic words of the English vocabulary, showing that an overwhelming majority (84%) of polysyllables have shorter words embedded within them. Positional analyses show that these embeddings are most common at the onsets of the longer words. Although both phonological and syntactic constraints could rule out some embedded words, they do not remove the problem. Lexical competition provides a means of dealing with lexical embedding. It is also supported by a growing body of experimental evidence. We present results which indicate that competition operates both between word candidates that begin at the same point in the input and candidates that begin at different points (McQueen, Norris, & Cutler, 1994; Norris, McQueen, & Cutler, in press). We conclude that lexical competition is an essential component in models of continuous speech recognition.
  • Norris, D., McQueen, J. M., & Cutler, A. (1995). Competition and segmentation in spoken word recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 1209-1228.

    Abstract

    Spoken utterances contain few reliable cues to word boundaries, but listeners nonetheless experience little difficulty identifying words in continuous speech. The authors present data and simulations that suggest that this ability is best accounted for by a model of spoken-word recognition combining competition between alternative lexical candidates and sensitivity to prosodic structure. In a word-spotting experiment, stress pattern effects emerged most clearly when there were many competing lexical candidates for part of the input. Thus, competition between simultaneously active word candidates can modulate the size of prosodic effects, which suggests that spoken-word recognition must be sensitive both to prosodic structure and to the effects of competition. A version of the Shortlist model ( D. G. Norris, 1994b) incorporating the Metrical Segmentation Strategy ( A. Cutler & D. Norris, 1988) accurately simulates the results using a lexicon of more than 25,000 words.
  • Cutler, A., & Butterfield, S. (1990). Durational cues to word boundaries in clear speech. Speech Communication, 9, 485-495.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by producing deliberately clear speech. We found that speakers do indeed attempt to mark word boundaries; moreover, they differentiate between word boundaries in a way which suggests they are sensitive to listener needs. Application of heuristic segmentation strategies makes word boundaries before strong syllables easiest for listeners to perceive; but under difficult listening conditions speakers pay more attention to marking word boundaries before weak syllables, i.e., they mark those boundaries which are otherwise particularly hard to perceive.
  • Cutler, A., McQueen, J. M., & Robinson, K. (1990). Elizabeth and John: Sound patterns of men’s and women’s names. Journal of Linguistics, 26, 471-482. doi:10.1017/S0022226700014754.
  • Cutler, A., & Scott, D. R. (1990). Speaker sex and perceived apportionment of talk. Applied Psycholinguistics, 11, 253-272. doi:10.1017/S0142716400008882.

    Abstract

    It is a widely held belief that women talk more than men; but experimental evidence has suggested that this belief is mistaken. The present study investigated whether listener bias contributes to this mistake. Dialogues were recorded in mixed-sex and single-sex versions, and male and female listeners judged the proportions of talk contributed to the dialogues by each participant. Female contributions to mixed-sex dialogues were rated as greater than male contributions by both male and female listeners. Female contributions were more likely to be overestimated when they were speaking a dialogue part perceived as probably female than when they were speaking a dialogue part perceived as probably male. It is suggested that the misestimates are due to a complex of factors that may involve both perceptual effects such as misjudgment of rates of speech and sociological effects such as attitudes to social roles and perception of power relations.
  • Cutler, A. (1980). La leçon des lapsus. La Recherche, 11(112), 686-692.
  • Swinney, D. A., Zurif, E. B., & Cutler, A. (1980). Effects of sentential stress and word class upon comprehension in Broca’s aphasics. Brain and Language, 10, 132-144. doi:10.1016/0093-934X(80)90044-9.

    Abstract

    The roles which word class (open/closed) and sentential stress play in the sentence comprehension processes of both agrammatic (Broca's) aphasics and normal listeners were examined with a word monitoring task. Overall, normal listeners responded more quickly to stressed than to unstressed items, but showed no effect of word class. Aphasics also responded more quickly to stressed than to unstressed materials, but, unlike the normals, responded faster to open than to closed class words regardless of their stress. The results are interpreted as support for the theory that Broca's aphasics lack the functional underlying open/closed class word distinction used in word recognition by normal listeners.
