Anne Cutler †

Presentations

  • Cutler, A. (2019). Language-specificity in processing, and its origins [Keynote]. Talk presented at the 4th Workshop on Infant Language Development (WILD 2019). Potsdam, Germany. 2019-06-13 - 2019-06-15.
  • Cutler, A., Baldacchino, J., Wagner, A., & Peter, V. (2016). Language-specificity in early cortical responses to speech sounds. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    The continuity of speech articulation ensures that in all languages, spoken sounds influence one another. Thus there are potentially cues to a sound’s identity in the realisation of surrounding sounds. Listeners make use of such coarticulatory cues – but not always. It has long been known (Harris, Lang. Sp., 1958) that English-speakers use this coarticulation to identify [f] but not [s]. The reason is that place of articulation cues can distinguish [f] from its very close perceptual competitor [θ] (deaf/death), while [s] has no such perceptual competitor and hence less need of such disambiguation. In languages with [f] but no [θ] (e.g., Dutch, Polish), listeners do not use coarticulation to identify [f], whereas listeners do use coarticulation to identify [s] where [s] has close competitors (Polish; Wagner et al., JASA, 2006). The patterning of coarticulation cue use is thus language-specific. In those studies, listeners’ use of coarticulatory cues was revealed by comparing responses to the same sounds in matching versus mismatching phonetic context (e.g., in afa, asa either as originally recorded, or with the consonants cross-spliced); sensitivity to this difference signals attention to coarticulation. We used this same method to assess whether language-specificity could be observed in the early cortical responses to speech, by measuring auditory evoked potentials in response to change in an ongoing sound (Acoustic Change Complex [ACC]; Martin & Boothroyd, JASA, 2000). 18 undergraduate native speakers of Australian English (11 females) heard, while watching silent video, 900 bisyllables (150 repetitions each of afa and asa in original, identity-spliced and cross-spliced realisation, where identity-spliced afa has initial [a] from another utterance of afa, cross-spliced afa has [a] from asa). If the ACC exhibits the language-specific differential response to [f] versus [s], we predict a significant difference across stimulus types (cross-spliced versus the other two stimulus types) for afa but not for asa. Listeners’ EEG was recorded (BioSemi, 64 channels), filtered between 0.1-30 Hz, divided into epochs from -100 to +1000 ms from token onset, and the epochs averaged separately for each bisyllable and stimulus type. The ACC amplitude was calculated from the grand averaged waveform across listeners as the difference in amplitude between the N1 and P2 peaks at the Fz electrode site; these differences were analysed in Bonferroni-corrected planned comparisons across the three stimulus types (unspliced, identity-spliced, cross-spliced) for each of afa and asa. For asa, the planned comparisons showed no differences at all between stimulus types. For afa, in contrast, the comparison between unspliced and cross-spliced stimulus types revealed that cross-spliced tokens generated a significantly smaller ACC: F(1,17)=5.98, p<.05. The amplitudes from the unspliced and identity-spliced afa stimuli however did not significantly differ. These findings indicate that English-speaking listeners’ coarticulation usage patterns – sensitivity to cues in a preceding vowel in the case of [f], insensitivity in the case of [s] – can be detected in the ACC, suggesting that native language experience tailors even the initial cortical responses to speech sounds.
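
    A minimal sketch of the peak-to-peak ACC measure described above, in Python with NumPy. The sampling rate, the N1/P2 search windows, and the synthetic test waveform are illustrative assumptions, not values taken from the study; only the epoch window, the averaging step, and the Fz peak-to-peak definition come from the abstract.

    ```python
    import numpy as np

    FS = 512            # sampling rate in Hz (assumed; not stated in the abstract)
    EPOCH_START = -0.1  # epochs run from -100 ms ...
    EPOCH_END = 1.0     # ... to +1000 ms relative to token onset

    def acc_amplitude(avg_fz, change_onset, fs=FS, epoch_start=EPOCH_START,
                      n1_window=(0.05, 0.15), p2_window=(0.15, 0.30)):
        """Peak-to-peak ACC (P2 minus N1) from an averaged Fz waveform.

        avg_fz       -- 1-D array: grand-averaged epoch at Fz, in microvolts
        change_onset -- time (s, re token onset) of the acoustic change,
                        i.e. the vowel-fricative boundary in afa/asa
        The N1/P2 search windows (s, re the change) are assumptions.
        """
        def peak_slice(t0, t1):
            i0 = int((change_onset + t0 - epoch_start) * fs)
            i1 = int((change_onset + t1 - epoch_start) * fs)
            return avg_fz[i0:i1]

        n1 = peak_slice(*n1_window).min()   # N1: most negative deflection
        p2 = peak_slice(*p2_window).max()   # P2: most positive deflection
        return p2 - n1

    # Toy usage: a synthetic epoch with an N1-P2 complex after a change at 300 ms.
    t = np.arange(EPOCH_START, EPOCH_END, 1 / FS)
    fake = -3 * np.exp(-(t - 0.4) ** 2 / 0.001) + 4 * np.exp(-(t - 0.5) ** 2 / 0.002)
    print(f"ACC amplitude: {acc_amplitude(fake, change_onset=0.3):.2f} uV")
    ```

    The comparison reported in the abstract would then amount to computing this amplitude separately for the unspliced, identity-spliced and cross-spliced averages of afa and of asa.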
  • Ullas, S., Eisner, F., Cutler, A., & Formisano, E. (2016). Lexical and lip-reading information as sources of phonemic boundary recalibration. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Listeners can flexibly adjust boundaries between phonemes when exposed to biased information. Ambiguous sounds are particularly susceptible to being interpreted as certain phonemes depending on the surrounding context, so that if they are embedded in words, the sound can be perceived as the phoneme that would naturally occur in the word. Similarly, ambiguous sounds presented simultaneously with videos of a speaker's lip movements can also affect the listener's perception, such that the ambiguous sound is interpreted as the phoneme corresponding to the speaker's lip movements. Listeners have been shown to use both forms of phonemic boundary recalibration to adapt in contexts where speech is unclear, due to noise or exposure to a new accent. The current study was designed to directly compare phonemic recalibration effects based on lexical and lip-reading exposures. A specific goal was to investigate how easily listeners can follow alternating lexical and lip-reading exposures, in order to determine how listeners can best switch between the two. In the experiment, participants (N=28) were exposed to blocked presentations of words or videos embedded with an individually determined ambiguous token halfway between /oop/ and /oot/. In lexical blocks, the stimuli consisted of audio recordings of Dutch words that ended in either /oop/ or /oot/, with the naturally occurring ending replaced with the ambiguous token. In lip-reading exposure blocks, the stimuli were made up of video recordings of the same native Dutch speaker pronouncing pseudo-words that visually appeared to end in /oop/ or /oot/, but with the audio of the ending likewise replaced with the same ambiguous token. Two types of presentation were administered to two groups of 14, with one version switching the modality of exposure after every block, and the other after every four blocks. Following each exposure block, a six-item post-test was presented, in which participants heard the ambiguous token and its two neighbors from a 10-step continuum in isolation, each presented twice, and were asked to report whether each sound resembled /oop/ or /oot/. Results from a mixed-factor ANOVA showed that subjects could flexibly adjust phoneme boundaries: there was a main effect of the biased phoneme, such that there was a greater proportion of /oot/ responses (pooled across all post-test items) following /oot/ bias blocks than following /oop/ bias blocks, F(1,28) = 15.828, p<0.01. There was also a main effect of exposure type, comparing lexical and lip-reading exposures, F(1,28) = 4.405, p<0.05, indicating that recalibration was stronger following lip-reading exposure than lexical exposure. Additionally, a significant interaction between exposure type and phoneme bias was revealed, F(1,28) = 6.475, p<0.05, showing that the magnitude of the difference between p-biased and t-biased blocks was also greater with lip-reading exposure. No significant differences were found between the two presentation types, neither for exposure type nor for phoneme bias. These results indicate that phoneme boundaries can be influenced by alternating lexical and lip-reading sources of information, and that lip-reading information is especially effective in accomplishing this.
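
    As a sketch of how the pooled post-test score could be derived, a short pandas example follows; the data layout, column names and example values are invented for illustration and are not from the study's materials.

    ```python
    import pandas as pd

    # Hypothetical long-format post-test log: one row per response;
    # resp_oot is 1 if the listener reported hearing /oot/.
    df = pd.DataFrame({
        "subject":  [1, 1, 1, 1, 2, 2, 2, 2],
        "bias":     ["oot", "oot", "oop", "oop"] * 2,  # bias of preceding block
        "exposure": ["lexical", "lipread"] * 4,        # modality of that block
        "resp_oot": [1, 1, 0, 1, 1, 0, 0, 0],
    })

    # Proportion of /oot/ responses, pooled across post-test items:
    props = (df.groupby(["subject", "bias", "exposure"])["resp_oot"]
               .mean().rename("p_oot").reset_index())

    # Recalibration effect per modality: p(/oot/ | oot bias) - p(/oot/ | oop bias)
    wide = props.pivot_table(index=["subject", "exposure"],
                             columns="bias", values="p_oot")
    wide["recalibration"] = wide["oot"] - wide["oop"]
    print(wide)
    # The abstract's mixed-factor ANOVA (phoneme bias x exposure type within
    # subjects, presentation type between groups) would then be run on p_oot.
    ```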
  • Cutler, A. (2015). Big issues in speech perception: Abstraction and nativeness [Plenary Lecture]. Talk presented at the 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow. 2015-08-10 - 2015-08-14.
  • Cutler, A. (2012). Eén taalpsychologie is geen taalpsychologie (Part II) [Valedictory lecture]. Talk presented at the Radboud Universiteit Nijmegen. The Netherlands. 2012-09-20.

    Abstract

    The first part of the title of Anne Cutler's valedictory lecture was also the title of her inaugural lecture at Radboud University Nijmegen, in 1996. The thrust of that argument was that psycholinguistics must proceed comparatively: only by charting the human processing of different languages, and comparing the processing of one language with that of another, can human language processing in general be understood. Such cross-language comparison has since become a widespread psycholinguistic method. Comparison of the use of two languages versus one is also well represented in the field. And recently, comparison of the language processing of one individual language user with that of another has appeared on the scientific agenda as well. In her valedictory lecture Cutler argues for a fourth form of comparison: new research techniques make it possible to compare, within the same users, the processing of one language (for example, one commanded perfectly) with that of another (for example, one almost forgotten). Cutler's valedictory lecture is thus not an end, but rather a new beginning. Biography: Anne Cutler (Melbourne, Australia, 1945) takes her leave with this lecture as professor of Comparative Psycholinguistics at Radboud University Nijmegen. From 1993 she was also one of the directors of the Max Planck Institute for Psycholinguistics in Nijmegen. In 1999 Cutler received a Spinoza Prize from the Netherlands Organisation for Scientific Research. With that prize she made possible the establishment of the baby research center affiliated with Radboud University.
  • Cutler, A. (2011). Different languages make different listeners [R D Wright Lecture]. Talk presented at the University of Melbourne. Melbourne, Australia. 2011-08-02.

    Abstract

    Babies are born with no predisposition to a particular language; they acquire the language they hear. In other words, the processes in the baby brain must be language-universal. Adults listen extremely efficiently to speech in their native language, drawing on processes that would work very inefficiently with other languages. In other words, speech processing in the adult brain is language specific. What happens in between? That’s what this lecture is about.
  • Cutler, A. (2011). The induction of native listening. Talk presented at NET-Symposium 2011. Utrecht, Netherlands. 2011-03-18.

    Abstract

    Listening to speech is a process that differs across languages, because it is exquisitely tailored to the structure of the native language, and the structure itself differs across languages. Languages effectively train their listeners to process them efficiently. The training begins from the earliest days of speech perception, in the first year of life. Even minor structural differences between closely related languages can, in due course, lead to significant processing differences by adult listeners.
  • Cutler, A. (2010). Alfa/gamma careers. Talk presented at Get inspired! Meeting for aspiring PhDs and postdocs. Radboud University Nijmegen. 2010-06-07.
  • Cutler, A., & Broersma, M. (2010). Competition dynamics in second language listening. Talk presented at Psycholinguistic approaches to speech recognition in adverse conditions. University of Bristol, UK. 2010-03-09.

    Abstract

    Listening, in any language, involves processing the phonemic content of spoken input in order to identify the words of which utterances are made up. Models of spoken-word recognition agree that deriving the correct word sequence from speech input is a process in which multiple candidate words are considered in parallel, and in which words compete with one another where they separately lay claim to the same input. In ideal speech situations, the phonemic sequence of each spoken word would be fully instantiated in the speech signal, listeners would correctly identify every one of these uttered phonemes, and listeners' stored lexical representations would exactly match the form in which the words are encountered in speech. In the real world, with which this workshop is concerned, none of these propositions is guaranteed to hold. This presentation addresses the particular case of listening in a second language (L2), and the potential effects on the word recognition process of misidentifying a phoneme in the input. The misidentification of L2 phonemes is a notorious source of listening difficulty. Contrary to many L2 users' intuitions, however, the most serious problem is not lexical indistinguishability ('write' heard as 'light', etc.). There are two reasons for this: spurious homophones such as 'light/write' would contribute only a trivial increase to the significant number of real homophones in the lexicon, and the number of fully indistinguishable words is massively outweighed by the set of temporarily indistinguishable words, which nevertheless add to the amount of lexical competition that an L2 listener experiences. The relevant lexical statistics will be presented in support of this claim. Exacerbating the increase in lexical competition in L2 is the curious situation whereby an L2 user's lexical representations can encode phonemic distinctions which are not reliably perceived by the same person in spoken input. This combination of lexical accuracy with perceptual inaccuracy, now repeatedly established in listening experiments, is fatal in the competition situation. As will be illustrated in simulations with a computationally implemented spoken-word recognition model, the combination inevitably results in competition which is more persistent than the competition from accurately perceived words. Word recognition experiments with L2 listeners confirm that this extra-persistent competition is indeed observed. The real world of the second-language listener is more competition-prone than the world of the native-language listener, ideal or real.
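
    The persistence argument can be made concrete with a toy simulation. The sketch below is not the computationally implemented model the abstract refers to; it is a minimal interactive-activation loop in Python, with arbitrary parameter values, showing why a competitor fed by a misperceived phoneme dies away more slowly.

    ```python
    # Two word candidates receive bottom-up support and inhibit each other.
    # decay, inhibition and step count are arbitrary illustrative choices.

    def simulate(support, decay=0.2, inhibition=0.1, steps=30):
        """Synchronous interactive-activation updates; returns one dict of
        word activations per time step."""
        act = {w: 0.0 for w in support}
        history = []
        for _ in range(steps):
            act = {w: max(0.0, a + support[w] - decay * a
                          - inhibition * sum(act[v] for v in act if v != w))
                   for w, a in act.items()}
            history.append(dict(act))
        return history

    # L1 listener: the /r/ of 'write' is perceived accurately, so 'light'
    # gets no bottom-up support. L2 listener: the /r/-/l/ contrast is not
    # reliably perceived, so both candidates receive partial support.
    l1 = simulate({"write": 1.0, "light": 0.0})
    l2 = simulate({"write": 0.6, "light": 0.4})

    for step in (4, 14, 29):
        print(f"step {step:2d}:  L1 light={l1[step]['light']:.2f}   "
              f"L2 light={l2[step]['light']:.2f}")
    # 'light' never becomes active for the L1 listener, but for the L2
    # listener it stays active throughout: competition from the spurious
    # candidate is extra-persistent, as the abstract describes.
    ```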
  • Cutler, A., El Aissati, A., Hanulikova, A., & McQueen, J. M. (2010). Effects on speech parsing of vowelless words in the phonology. Talk presented at 12th Conference on Laboratory Phonology. University of New Mexico in Albuquerque, NM. 2010-07-08 - 2010-07-10.
  • Cutler, A. (2010). As soon as you find words, you can start using language. Talk presented at Talking about language: A one-day workshop to mark the launch of Wortschatzinsel. Georg-August-University Göttingen. 2010-10-22.

    Abstract

    The speech infants hear, in the first year of life before they themselves begin to speak, is mainly multi-word utterances, without clear pauses between the words. Thus to construct the initial vocabulary they need to begin speaking themselves, infants need to learn how to segment words from speech. Indeed, there is evidence that segmentation ability in the first year of life correlates positively with vocabulary size at two years. This evidence has come principally from studies of segmentation using the behavioral headturn-preference procedure. If infants first hear words in isolation, and then recognise these familiarised words when they occur later in sentences, they have shown that they can segment individual words out of multi-word utterances. An electrophysiological analogue to this behavioral procedure, measuring event-related potentials (ERPs), was however later developed by Kooijman. This allowed, for the first time, an online assessment of infants' word segmentation. Kooijman tested seven- and ten-month-olds; the ten-month-olds showed a clear recognition response (in the form of a left negativity) for familiarised words heard later in sentences, relative to unfamiliar words. This showed that the ten-month-olds indeed had the ability to segment speech. Such segmentation behavior was not, however, consistently present in the seven-month-olds. We here report three studies relating this ERP measure of speech segmentation to later language development. First, we divided the seven-month-old infants tested by Kooijman into two sub-groups: those with an ERP effect similar to the ten-month-olds' pattern, and those without such an effect. When re-tested at three years of age, the former group displayed significantly higher language scores than the latter group. Second, we examined whether ten-month-olds can recognize words that have previously been presented just once, within an utterance. Recognition was again indicated by a left-frontal negativity, and presence and size of this response proved in later testing to be related to vocabulary size, both at 12 and at 24 months. Third, we conducted a study in which both familiarization and test phases consisted of continuous sentences. Again we observed the same recognition response in the infant brain, and the patterning of this response was once more related to later performance, this time in a test of recognition of known words at 16 months. Hence, with a variety of measures, we see that a consistently observed ERP effect of word segmentation serves as a direct predictor of the degree of later language development.
  • Cutler, A. (2010). How the native language shapes listening to speech. LOT Winter School 2010, Amsterdam, Free University (VU). Amsterdam, the Netherlands, 2010-01-18 - 2010-01-22.
  • Cutler, A. (2010). Native listening: How the native language shapes listening to speech. Talk presented at Cognitive Neuroscience: New Challenges and Future Developments: BCBL Scientific opening ceremony congress. Basque Center on Cognition, Brain and Language, San Sebastian. 2010-05-21.
  • Cutler, A. (2010). The continuity of speech, and the continuous development of listeners' ability to deal with it. Talk presented at CSCA Lecture [Cognitive Science Center Amsterdam]. University of Amsterdam, The Netherlands. 2010-03-17.

    Abstract

    Speech is a continuous stream. Listeners can only make sense of speech by identifying the components that comprise it - words. Segmenting speech into words is an operation which has to be learned very early, since it is how infants compile even their initial vocabulary. Infants' relative success at achieving speech segmentation in fact turns out to be a direct predictor of language skills during later development. Adult listeners segment speech so efficiently, however, that they are virtually never aware of the operation of segmentation. In part they achieve this level of efficiency by exploiting accrued knowledge of relevant structure in the native language. Amassing this language-specific knowledge also starts in infancy. However, some relevant features call on more advanced levels of language processing ability; the continuous refinement of segmentation skills is apparent in that these structural features are exploited for segmentation too, even when applying them means overturning otherwise universal constraints available in infancy.
  • Cutler, A. (2010). Speech segmentation and its payoffs [Colloquium]. Talk presented at The Australian National University. Canberra. 2010-07-23.

    Abstract

    Speech is a continuous stream. Listeners can only make sense of speech by identifying the components that comprise it - words. Segmenting speech into words is an operation which has to be learned very early, since it is how infants compile even their initial vocabulary. Evidence from new behavioural and electrophysiological studies of infant speech perception illustrates this learning process. Infants’ relative success at achieving speech segmentation in fact turns out to be a direct predictor of language skills during later development. Adult listeners segment speech so efficiently, however, that they are virtually never aware of the operation of segmentation. In part they achieve this level of efficiency by exploiting accrued knowledge of relevant structure in the native language. Amassing this language-specific knowledge also starts in infancy. However, some relevant features call on more advanced levels of language processing ability; the continuous refinement of segmentation efficiency is apparent in that (as revealed by adult listening studies across a dozen or so languages) these structural features are exploited for segmentation too, even if applying them means overturning constraints used, perhaps universally, by infants.
  • Cutler, A. (2010). The lowest-level orthographic effect in spoken-word recognition so far? [Keynote lecture]. Talk presented at The Workshop on "Brain, Speech and Orthography". Université Libre de Bruxelles. 2010-10-15.
  • Junge, C., Cutler, A., & Hagoort, P. (2010). Dynamics of early word learning in nine-month-olds: An ERP study. Poster presented at FENS forum 2010 - 7th FENS Forum of European Neuroscience, Amsterdam, The Netherlands.

    Abstract

    What happens in the brain when infants are learning the meaning of words? Only a few studies (Torkildsen et al., 2008; Friedrich & Friederici, 2008) have addressed this question, but they focused only on novel word learning, not on the acquisition of infants' first words. From behavioral research we know that 12-month-olds can recognize novel exemplars of early typical word categories, but only after having been trained on them from nine months of age (Schafer, 2005). What happens in the brain during such training? With event-related potentials, we studied the effect of training context on word comprehension. We manipulated the type/token ratio of the training context (one versus six exemplars). 24 normally developing Dutch nine-month-olds (±14 days, 12 boys) participated. Twenty easily depictable words were chosen based on parental vocabulary reports for 15-month-olds. All trials consisted of a high-resolution photograph shown for 2200 ms, with an acoustic label presented at 1000 ms. Each training-test block contrasted two words that did not share initial phonemes or semantic class. The training phase started with six trials of one category, followed by six trials of the second category. Results show more negative responses for the more frequent pairings, consistent with word familiarization studies in older infants (Torkildsen et al., 2008; Friedrich & Friederici, 2008). This increase appears to be larger if the pictures changed. In the test phase we tested word comprehension for novel exemplars with the picture-word mismatch paradigm. Here, we observed an N400 similar to that reported by Mills et al. (2005) for 13-month-olds. German 12-month-olds, however, did not show such an effect (Friedrich & Friederici, 2005). Our study makes it implausible that the latter is due to immaturity of the N400 mechanism. The N400 was present in Dutch 9-month-olds, even though some parents judged their child not to understand most of the words. There was no interaction with training type, suggesting that type/token ratio does not affect infant word recognition of novel exemplars.
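
    As an illustration of the training manipulation, a hypothetical sketch of trial-list construction follows; the Dutch word labels, the data structure and the sampling scheme are invented for illustration, not taken from the study.

    ```python
    import random

    def make_training_block(word_a, word_b, n_exemplars, n_trials=6):
        """Six trials of word_a followed by six of word_b; each trial pairs
        the spoken label with one photograph (exemplar index)."""
        block = []
        for word in (word_a, word_b):
            if n_exemplars == 1:
                exemplars = [0] * n_trials            # one token, repeated
            else:
                exemplars = random.sample(range(n_exemplars), n_trials)
            block += [{"word": word, "exemplar": e} for e in exemplars]
        return block

    # One-exemplar versus six-exemplar training contexts (type/token ratio):
    one_token = make_training_block("bal", "poes", n_exemplars=1)
    six_token = make_training_block("bal", "poes", n_exemplars=6)
    print(six_token[:3])
    ```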
  • Junge, C., Hagoort, P., & Cutler, A. (2010). Early word learning in nine-month-olds: Dynamics of picture-word priming. Talk presented at 8th Sepex conference / 1st Joint conference of the EPS and SEPEX. Granada, Spain. 2010-04.

    Abstract

    How do infants learn words? Most studies address this question by focusing on novel word learning. Only a few studies concentrate on the stage at which infants learn their first words. Schafer (2005) showed that 12-month-olds can recognize novel exemplars of early typical word categories, but only after having been trained on them from nine months of age. What happens in the brain during such training? With event-related potentials, we studied the effect of training context on word comprehension. 24 normally developing Dutch nine-month-olds (±14 days, 12 boys) participated. Twenty easily depictable words were chosen based on parental vocabulary reports for 15-month-olds. All trials consisted of a high-resolution photograph shown for 2200 ms, with an acoustic label presented at 1000 ms. Each training-test block contrasted two words that did not share initial phonemes or semantic class. The training phase started with six trials of one category, followed by six trials of the second category. We manipulated the type/token ratio of the training context (one versus six exemplars). Results show more negative responses for the more frequent pairings, consistent with word familiarization studies in older infants (Torkildsen et al., 2008; Friedrich & Friederici, 2008). This increase appears to be larger if the pictures changed. In the test phase we tested word comprehension for novel exemplars with the picture-word mismatch paradigm. Here, we observed an N400 similar to that reported by Mills et al. (2005) for 13-month-olds. German 12-month-olds, however, did not show such an effect (Friedrich & Friederici, 2005). Our study makes it implausible that the latter is due to immaturity of the N400 mechanism. The N400 was present in Dutch 9-month-olds, even though some parents judged their child not to understand most of the words. There was no interaction with training type, suggesting that type/token ratio does not affect infants' word recognition of novel exemplars.
  • Junge, C., Hagoort, P., & Cutler, A. (2010). Early word segmentation ability and later language development: Insight from ERP's. Talk presented at Child Language Seminar 2010. London. 2010-06-24 - 2010-06-26.
  • Junge, C., Hagoort, P., & Cutler, A. (2010). Early word segmentation ability is related to later word processing skill. Poster presented at XVIIIth Biennial International Conference on Infant Studies, Baltimore, MD.
  • Tuinman, A., & Cutler, A. (2010). Casual speech processes: L1 knowledge and L2 speech perception. Talk presented at Sixth International Symposium on the Acquisition of Second Language Speech [New Sounds 2010]. Poznań, Poland. 2010-05-01 - 2010-05-03.
  • Cutler, A. (2009). Cognitive psychology and women's careers. Talk presented at CBU Women in Science Symposium. MRC Cognition and Brain Sciences Unit, Cambridge, UK. 2009-10-28.
  • Cutler, A. (2009). Another thing the native language does for you. Talk presented at Workshop, MARCS Auditory Laboratories 10th Anniversary. Sydney. 2009-07-01.
  • Cutler, A. (2009). L2 speech perception: Some conclusions not to jump to. Talk presented at Sound to Sense Workshop on Issues in L2 Speech. University of the Basque Country, Vitoria, Spain. 2009-09-18.
  • Cutler, A. (2009). Second language acquisition. Topic: Segmenting speech. Linguistics unit in the Bachelor of Arts. School of Humanities and Languages, Sydney, 2009-08.
  • Cutler, A. (2009). Variation induces native listening [Keynote lecture]. Talk presented at Australian Linguistic Society Annual Conference: Advances in Linguistic Typology (ALS 2009). Melbourne, Australia. 2009-07-11.
  • Junge, C., Cutler, A., & Hagoort, P. (2009). Word segmentation at ten months and word processing at 16 months. Poster presented at Neurobilingualism: Bilingual functioning from infancy to adulthood, Bangor University, Wales, UK.
  • Weber, A., & Cutler, A. (2007). Knowing what you cannot hear: Knowledge sources for lexical representations in asymmetric bilingualism. Talk presented at 3rd Annual Rovereto Workshop on Bilingualism. Rovereto, Italy. 2007-09-20 - 2007-09-23.

    Abstract

    Early-acquired vocabulary is (of necessity) mostly based on auditory input. But for later-acquired vocabulary (in an L1 or L2), other knowledge sources can also be tapped. Where phonetic contrasts in an L2 cannot be adequately distinguished, the level of detail provided by these sources at the lexical level may be greater than prelexical processing is capable of matching.
  • Weber, A., Cutler, A., Escudero, P., & Hayes-Harb, R. (2007). Exploring the phonological representations in the L2 lexicon. Talk presented at Workshop on Language Processing in First and Second Language Learners (MPI for Psycholinguistics). Nijmegen, The Netherlands. 2007-11-23 - 2007-11-24.
  • Weber, A., & Cutler, A. (2002). Phonetic discrimination and non-native spoken-word recognition. Poster presented at 143rd meeting of the Acoustical Society of America, Pittsburgh, PA.

    Abstract

    Words sharing initial segments are briefly activated during the recognition of spoken words. For example, given the input panda, English listeners will initially activate panda and panic among other candidates, which will then compete against each other for recognition. However, in a non-native language, listeners may be less accurate in processing phonemes. This may in turn influence competitor activation in non-native listening.
  • Broersma, M., & Cutler, A. (2001). Comprehension of non-native speech: Inaccurate phoneme processing and activation of lexical competitors. Poster presented at 3rd Tutorials in Behavioral and Brain Sciences, Berg en Dal, The Netherlands.
