Anne Cutler †

Publications

  • Cutler, A., & Norris, D. (2016). Bottoms up! How top-down pitfalls ensnare speech perception researchers too. Commentary on C. Firestone & B. Scholl: Cognition does not affect perception: Evaluating the evidence for 'top-down' effects. Behavioral and Brain Sciences, e236. doi:10.1017/S0140525X15002745.

    Abstract

    Not only can the pitfalls that Firestone & Scholl (F&S) identify be generalised across multiple studies within the field of visual perception, but also they have general application outside the field wherever perceptual and cognitive processing are compared. We call attention to the widespread susceptibility of research on the perception of speech to versions of the same pitfalls.
  • Norris, D., McQueen, J. M., & Cutler, A. (2016). Prediction, Bayesian inference and feedback in speech recognition. Language, Cognition and Neuroscience, 31(1), 4-18. doi:10.1080/23273798.2015.1081703.

    Abstract

    Speech perception involves prediction, but how is that prediction implemented? In cognitive models prediction has often been taken to imply that there is feedback of activation from lexical to pre-lexical processes as implemented in interactive-activation models (IAMs). We show that simple activation feedback does not actually improve speech recognition. However, other forms of feedback can be beneficial. In particular, feedback can enable the listener to adapt to changing input, and can potentially help the listener to recognise unusual input, or recognise speech in the presence of competing sounds. The common feature of these helpful forms of feedback is that they are all ways of optimising the performance of speech recognition using Bayesian inference. That is, listeners make predictions about speech because speech recognition is optimal in the sense captured in Bayesian models.
  • Cutler, A. (2015). Lexical stress in English pronunciation. In M. Reed, & J. M. Levis (Eds.), The Handbook of English Pronunciation (pp. 106-124). Chichester: Wiley.
  • Cutler, A. (2015). Representation of second language phonology. Applied Psycholinguistics, 36(1), 115-128. doi:10.1017/S0142716414000459.

    Abstract

    Orthographies encode phonological information only at the level of words (chiefly, the information encoded concerns phonetic segments; in some cases, tonal information or default stress may be encoded). Of primary interest to second language (L2) learners is whether orthography can assist in clarifying L2 phonological distinctions that are particularly difficult to perceive (e.g., where one native-language phonemic category captures two L2 categories). A review of spoken-word recognition evidence suggests that orthographic information can install knowledge of such a distinction in lexical representations but that this does not affect learners’ ability to perceive the phonemic distinction in speech. Words containing the difficult phonemes become even harder for L2 listeners to recognize, because perception maps less accurately to lexical content.
  • Ernestus, M., & Cutler, A. (2015). BALDEY: A database of auditory lexical decisions. Quarterly Journal of Experimental Psychology, 68, 1469-1488. doi:10.1080/17470218.2014.984730.

    Abstract

    In an auditory lexical decision experiment, 5,541 spoken content words and pseudo-words were presented to 20 native speakers of Dutch. The words vary in phonological makeup and in number of syllables and stress pattern, and are further representative of the native Dutch vocabulary in that most are morphologically complex, comprising two stems or one stem plus derivational and inflectional suffixes, with inflections representing both regular and irregular paradigms; the pseudo-words were matched in these respects to the real words. The BALDEY data file includes response times and accuracy rates, with for each item morphological information plus phonological and acoustic information derived from automatic phonemic segmentation of the stimuli. Two initial analyses illustrate how this data set can be used. First, we discuss several measures of the point at which a word has no further neighbors, and compare the degree to which each measure predicts our lexical decision response outcomes. Second, we investigate how well four different measures of frequency of occurrence (from written corpora, spoken corpora, subtitles and frequency ratings by 70 participants) predict the same outcomes. These analyses motivate general conclusions about the auditory lexical decision task. The (publicly available) BALDEY database lends itself to many further analyses.
  • Cutler, A. (2001). De baby in je hoofd: luisteren naar eigen en andermans taal [The baby in your head: Listening to your own and others' language; Speech at the Catholic University's 78th Dies Natalis]. Nijmegen, The Netherlands: Nijmegen University Press.
  • Cutler, A. (2001). Entries on: Acquisition of language by non-human primates; bilingualism; compound (linguistic); development of language-specific phonology; gender (linguistic); grammar; infant speech perception; language; lexicon; morphology; motor theory of speech perception; perception of second languages; phoneme; phonological store; phonology; prosody; sign language; slips of the tongue; speech perception; speech production; stress (linguistic); syntax; word recognition; words. In P. Winn (Ed.), Dictionary of biological psychology. London: Routledge.
  • Cutler, A. (2001). Listening to a second language through the ears of a first. Interpreting, 5, 1-23.
  • Cutler, A., McQueen, J. M., Norris, D., & Somejuan, A. (2001). The roll of the silly ball. In E. Dupoux (Ed.), Language, brain and cognitive development: Essays in honor of Jacques Mehler (pp. 181-194). Cambridge, MA: MIT Press.
  • Cutler, A., & Van Donselaar, W. (2001). Voornaam is not a homophone: Lexical prosody and lexical access in Dutch. Language and Speech, 44, 171-195. doi:10.1177/00238309010440020301.

    Abstract

    Four experiments examined Dutch listeners’ use of suprasegmental information in spoken-word recognition. Isolated syllables excised from minimal stress pairs such as VOORnaam/voorNAAM could be reliably assigned to their source words. In lexical decision, no priming was observed from one member of minimal stress pairs to the other, suggesting that the pairs’ segmental ambiguity was removed by suprasegmental information. Words embedded in nonsense strings were harder to detect if the nonsense string itself formed the beginning of a competing word, but a suprasegmental mismatch to the competing word significantly reduced this inhibition. The same nonsense strings facilitated recognition of the longer words of which they constituted the beginning, but again the facilitation was significantly reduced by suprasegmental mismatch. Together these results indicate that Dutch listeners effectively exploit suprasegmental cues in recognizing spoken words. Nonetheless, suprasegmental mismatch appears to be somewhat less effective in constraining activation than segmental mismatch.
  • McQueen, J. M., & Cutler, A. (Eds.). (2001). Spoken word access processes. Hove, UK: Psychology Press.
  • McQueen, J. M., & Cutler, A. (2001). Spoken word access processes: An introduction. Language and Cognitive Processes, 16, 469-490. doi:10.1080/01690960143000209.

    Abstract

    We introduce the papers in this special issue by summarising the current major issues in spoken word recognition. We argue that a full understanding of the process of lexical access during speech comprehension will depend on resolving several key representational issues: what is the form of the representations used for lexical access; how is phonological information coded in the mental lexicon; and how is the morphological and semantic information about each word stored? We then discuss a number of distinct access processes: competition between lexical hypotheses; the computation of goodness-of-fit between the signal and stored lexical knowledge; segmentation of continuous speech; whether the lexicon influences prelexical processing through feedback; and the relationship of form-based processing to the processes responsible for deriving an interpretation of a complete utterance. We conclude that further progress may well be made by swapping ideas among the different sub-domains of the discipline.
  • McQueen, J. M., Otake, T., & Cutler, A. (2001). Rhythmic cues and possible-word constraints in Japanese speech segmentation. Journal of Memory and Language, 45, 103-132. doi:10.1006/jmla.2000.2763.

    Abstract

    In two word-spotting experiments, Japanese listeners detected Japanese words faster in vowel contexts (e.g., agura, to sit cross-legged, in oagura) than in consonant contexts (e.g., tagura). In the same experiments, however, listeners spotted words in vowel contexts (e.g., saru, monkey, in sarua) no faster than in moraic nasal contexts (e.g., saruN). In a third word-spotting experiment, words like uni, sea urchin, followed contexts consisting of a consonant-consonant-vowel mora (e.g., gya) plus either a moraic nasal (gyaNuni), a vowel (gyaouni) or a consonant (gyabuni). Listeners spotted words as easily in the first as in the second context (where in each case the target words were aligned with mora boundaries), but found it almost impossible to spot words in the third (where there was a single consonant, such as the [b] in gyabuni, between the beginning of the word and the nearest preceding mora boundary). Three control experiments confirmed that these effects reflected the relative ease of segmentation of the words from their contexts. We argue that the listeners showed sensitivity to the viability of sound sequences as possible Japanese words in the way that they parsed the speech into words. Since single consonants are not possible Japanese words, the listeners avoided lexical parses including single consonants and thus had difficulty recognizing words in the consonant contexts. Even though moraic nasals are also impossible words, they were not difficult segmentation contexts because, as with the vowel contexts, the mora boundaries between the contexts and the target words signaled likely word boundaries. Moraic rhythm appears to provide Japanese listeners with important segmentation cues.
  • Norris, D., McQueen, J. M., Cutler, A., Butterfield, S., & Kearns, R. (2001). Language-universal constraints on speech segmentation. Language and Cognitive Processes, 16, 637-660. doi:10.1080/01690960143000119.

    Abstract

    Two word-spotting experiments are reported that examine whether the Possible-Word Constraint (PWC) is a language-specific or language-universal strategy for the segmentation of continuous speech. The PWC disfavours parses which leave an impossible residue between the end of a candidate word and any likely location of a word boundary, as cued in the speech signal. The experiments examined cases where the residue was either a CVC syllable with a schwa, or a CV syllable with a lax vowel. Although neither of these syllable contexts is a possible lexical word in English, word-spotting in both contexts was easier than in a context consisting of a single consonant. Two control lexical-decision experiments showed that the word-spotting results reflected the relative segmentation difficulty of the words in different contexts. The PWC appears to be language-universal rather than language-specific.
  • Soto-Faraco, S., Sebastian-Galles, N., & Cutler, A. (2001). Segmental and suprasegmental mismatch in lexical access. Journal of Memory and Language, 45, 412-432. doi:10.1006/jmla.2000.2783.

    Abstract

    Four cross-modal priming experiments in Spanish addressed the role of suprasegmental and segmental information in the activation of spoken words. Listeners heard neutral sentences ending with word fragments (e.g., princi-) and made lexical decisions on letter strings presented at fragment offset. Responses were compared for fragment primes that fully matched the spoken form of the initial portion of target words, versus primes that mismatched in a single element (stress pattern; one vowel; one consonant), versus control primes. Fully matching primes always facilitated lexical decision responses, in comparison to the control condition, while mismatching primes always produced inhibition. The respective strength of the contribution of stress, vowel, and consonant (one feature mismatch or more) information did not differ statistically. The results support a model of spoken-word recognition involving automatic activation of word forms and competition between activated words, in which the activation process is sensitive to all acoustic information relevant to the language’s phonology.
  • Warner, N., Jongman, A., Cutler, A., & Mücke, D. (2001). The phonological status of Dutch epenthetic schwa. Phonology, 18, 387-420. doi:10.1017/S0952675701004213.

    Abstract

    In this paper, we use articulatory measures to determine whether Dutch schwa epenthesis is an abstract phonological process or a concrete phonetic process depending on articulatory timing. We examine tongue position during /l/ before underlying schwa and epenthetic schwa and in coda position. We find greater tip raising before both types of schwa, indicating light /l/ before schwa and dark /l/ in coda position. We argue that the ability of epenthetic schwa to condition the /l/ alternation shows that Dutch schwa epenthesis is an abstract phonological process involving insertion of some unit, and cannot be accounted for within Articulatory Phonology.
  • Cutler, A. (1989). Auditory lexical access: Where do we start? In W. Marslen-Wilson (Ed.), Lexical representation and process (pp. 342-356). Cambridge, MA: MIT Press.

    Abstract

    The lexicon, considered as a component of the process of recognizing speech, is a device that accepts a sound image as input and outputs meaning. Lexical access is the process of formulating an appropriate input and mapping it onto an entry in the lexicon's store of sound images matched with their meanings. This chapter addresses the problems of auditory lexical access from continuous speech. The central argument to be proposed is that utterance prosody plays a crucial role in the access process. Continuous listening faces problems that are not present in visual recognition (reading) or in noncontinuous recognition (understanding isolated words). Aspects of utterance prosody offer a solution to these particular problems.
  • Cutler, A., Howard, D., & Patterson, K. E. (1989). Misplaced stress on prosody: A reply to Black and Byng. Cognitive Neuropsychology, 6, 67-83.

    Abstract

    The recent claim by Black and Byng (1986) that lexical access in reading is subject to prosodic constraints is examined and found to be unsupported. The evidence from impaired reading which Black and Byng report is based on poorly controlled stimulus materials and is inadequately analysed and reported. An alternative explanation of their findings is proposed, and new data are reported for which this alternative explanation can account but their model cannot. Finally, their proposal is shown to be theoretically unmotivated and in conflict with evidence from normal reading.
  • Cutler, A. (1989). Straw modules [Commentary/Massaro: Speech perception]. Behavioral and Brain Sciences, 12, 760-762.
  • Cutler, A. (1989). The new Victorians. New Scientist, (1663), 66.
  • Patterson, R. D., & Cutler, A. (1989). Auditory preprocessing and recognition of speech. In A. Baddeley, & N. Bernsen (Eds.), Research directions in cognitive science: A European perspective: Vol. 1. Cognitive psychology (pp. 23-60). London: Erlbaum.
  • Smith, M. R., Cutler, A., Butterfield, S., & Nimmo-Smith, I. (1989). The perception of rhythm and word boundaries in noise-masked speech. Journal of Speech and Hearing Research, 32, 912-920.

    Abstract

    The present experiment tested the suggestion that human listeners may exploit durational information in speech to parse continuous utterances into words. Listeners were presented with six-syllable unpredictable utterances under noise-masking, and were required to judge between alternative word strings as to which best matched the rhythm of the masked utterances. For each utterance there were four alternative strings: (a) an exact rhythmic and word boundary match, (b) a rhythmic mismatch, and (c) two utterances with the same rhythm as the masked utterance, but different word boundary locations. Listeners were clearly able to perceive the rhythm of the masked utterances: The rhythmic mismatch was chosen significantly less often than any other alternative. Within the three rhythmically matched alternatives, the exact match was chosen significantly more often than either word boundary mismatch. Thus, listeners both perceived speech rhythm and used durational cues effectively to locate the position of word boundaries.
  • Cutler, A. (1984). Stress and accent in language production and understanding. In D. Gibbon, & H. Richter (Eds.), Intonation, accent and rhythm: Studies in discourse phonology (pp. 77-90). Berlin: de Gruyter.
  • Cutler, A., & Clifton, Jr., C. (1984). The use of prosodic information in word recognition. In H. Bouma, & D. G. Bouwhuis (Eds.), Attention and performance X: Control of language processes (pp. 183-196). London: Erlbaum.

    Abstract

    In languages with variable stress placement, lexical stress patterns can convey information about word identity. The experiments reported here address the question of whether lexical stress information can be used in word recognition. The results allow the following conclusions: 1. Prior information as to the number of syllables and lexical stress patterns of words and nonwords does not facilitate lexical decision responses (Experiment 1). 2. The strong correspondences between grammatical category membership and stress pattern in bisyllabic English words (strong-weak stress being associated primarily with nouns, weak-strong with verbs) are not exploited in the recognition of isolated words (Experiment 2). 3. When a change in lexical stress also involves a change in vowel quality, i.e., a segmental as well as a suprasegmental alteration, effects on word recognition are greater when no segmental correlates of suprasegmental changes are involved (Experiments 2 and 3). 4. Despite the above finding, when all other factors are controlled, lexical stress information per se can indeed be shown to play a part in word-recognition process (Experiment 3).
  • Scott, D. R., & Cutler, A. (1984). Segmental phonology and the perception of syntactic structure. Journal of Verbal Learning and Verbal Behavior, 23, 450-466. Retrieved from http://www.sciencedirect.com/science//journal/00225371.

    Abstract

    Recent research in speech production has shown that syntactic structure is reflected in segmental phonology--the application of certain phonological rules of English (e.g., palatalization and alveolar flapping) is inhibited across phrase boundaries. We examined whether such segmental effects can be used in speech perception as cues to syntactic structure, and the relation between the use of these segmental features as syntactic markers in production and perception. Speakers of American English (a dialect in which the above segmental effects occur) could indeed use the segmental cues in syntax perception; speakers of British English (in which the effects do not occur) were unable to make use of them, while speakers of British English who were long-term residents of the United States showed intermediate performance.
  • Cutler, A., & Foss, D. (1977). On the role of sentence stress in sentence processing. Language and Speech, 20, 1-10.
  • Fay, D., & Cutler, A. (1977). Malapropisms and the structure of the mental lexicon. Linguistic Inquiry, 8, 505-520. Retrieved from http://www.jstor.org/stable/4177997.
