James McQueen

Presentations

  • Ekerdt, C., Menks, W. M., Janzen, G., Kidd, E., Lemhöfer, K., McQueen, J. M., & Fernández, G. (2024). Does the way language knowledge accumulates over time change with age? Poster presented at the Highlights in the Language Sciences Conference 2024, Nijmegen, The Netherlands.
  • Uluşahin, O., Bosker, H. R., Meyer, A. S., & McQueen, J. M. (2024). Existing talker information may hinder convergence in synchronous speech. Talk presented at Psycholinguistics in Flanders (PiF 2024). Brussels, Belgium. 2024-05-27 - 2024-05-28.
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2024). Existing talker knowledge may make convergence more difficult. Poster presented at the IMPRS Conference 2024, Nijmegen, the Netherlands.
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2024). Knowledge of a talker’s f0 affects subsequent perception of voiceless fricatives. Poster presented at Speech Prosody 2024, Leiden, The Netherlands.
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2024). Existing talker information may hinder convergence in synchronous speech. Poster presented at the Highlights in the Language Sciences Conference 2024, Nijmegen, The Netherlands.
  • Wyman, N., Koning, M., Menks, W. M., Ekerdt, C., Fernández, G., Janzen, G., Kidd, E., Lemhöfer, K., & McQueen, J. M. (2024). Learning a new grammar in children and adults: Relationship between brain function and structure. Poster presented at the Highlights in the Language Sciences Conference 2024, Nijmegen, The Netherlands.
  • Severijnen, G. G. A., Bosker, H. R., & McQueen, J. M. (2023). Syllable rate drives rate normalization, but is not the only factor. Poster presented at the 20th International Congress of the Phonetic Sciences (ICPhS 2023), Prague, Czech Republic.
  • Severijnen, G., Bosker, H. R., & McQueen, J. M. (2023). Listeners prioritize acoustic information over orthographic information in rate normalization. Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
  • Severijnen, G. G., Bosker, H. R., & McQueen, J. M. (2023). Individual differences in lexical stress in Dutch: An examination of cue weighting in production. Talk presented at the 5th Phonetics and Phonology in Europe Conference (PaPE 2023). Nijmegen, The Netherlands. 2023-06-02 - 2023-06-04.
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2023). No evidence for convergence to sub-phonemic F2 shifts in shadowing. Poster presented at the 20th International Congress of the Phonetic Sciences (ICPhS 2023), Prague, Czech Republic.
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2023). The influence of contextual and talker F0 information on fricative perception. Poster presented at the 5th Phonetics and Phonology in Europe Conference (PaPE 2023), Nijmegen, The Netherlands.
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2023). Listeners converge to fundamental frequency in synchronous speech. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.

    Abstract

    Convergence broadly refers to interlocutors’ tendency to progressively sound more like each other over time. Recent empirical work has used various experimental paradigms to observe convergence in voice fundamental frequency (f0). One study used a stable mean f0 across trials in a synchronous speech task with manipulated (i.e., high and low) f0 conditions (Bradshaw & McGettigan, 2021). Here, we attempted to replicate this study in Dutch. First, in a reading task, participants read 40 sentences at their own pace to establish f0 baselines. Later, in a synchronous speech task, participants read 80 sentences in synchrony with a speaker whose voice was shifted 2 semitones above or below a reference mean f0 value (the high and low f0 conditions, respectively). The reference mean f0 value and the manipulation size were determined through multiple pre-tests. Our results revealed that the f0 manipulation significantly predicted f0 convergence in both the high and low f0 conditions. Furthermore, the proportion of convergers in the sample was larger than that reported by Bradshaw and McGettigan, highlighting the benefits of stimulus optimization. Our study thus provides stronger evidence that the pitch of two talkers tends to converge as they speak together.
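
    The core of the manipulation is a simple semitone-to-frequency conversion. As a minimal illustration (not the authors' code; all values hypothetical), the sketch below computes the ±2-semitone shift around a reference mean f0 and quantifies per-trial convergence as the reduction in a speaker's distance from the model talker's f0 relative to baseline.

    ```python
    import numpy as np

    def shift_semitones(f0_hz, semitones):
        """Shift a frequency by a number of semitones (12 semitones = 1 octave)."""
        return f0_hz * 2.0 ** (semitones / 12.0)

    reference_f0 = 120.0                           # Hz; hypothetical reference mean f0
    high_f0 = shift_semitones(reference_f0, +2.0)  # high-f0 condition (~134.7 Hz)
    low_f0 = shift_semitones(reference_f0, -2.0)   # low-f0 condition (~106.9 Hz)

    # Convergence per trial: baseline distance minus current distance from the
    # model talker's f0 (positive values = the speaker moved toward the model).
    baseline_f0 = 110.0                            # from the solo reading task
    trial_f0 = np.array([111.0, 113.5, 115.2])     # hypothetical per-trial mean f0
    convergence = abs(baseline_f0 - high_f0) - np.abs(trial_f0 - high_f0)
    print(high_f0, low_f0, convergence)
    ```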
  • Hintz, F., Voeten, C. C., McQueen, J. M., & Meyer, A. S. (2022). Quantifying the relationships between linguistic experience, general cognitive skills and linguistic processing skills. Talk presented at the 44th Annual Meeting of the Cognitive Science Society (CogSci 2022). Toronto, Canada. 2022-07-27 - 2022-07-30.
  • Hintz, F., McQueen, J. M., & Meyer, A. S. (2022). The principal dimensions of speaking and listening skills. Talk presented at the 22nd Conference of the European Society for Cognitive Psychology (ESCOP 2022). Lille, France. 2022-08-29 - 2022-09-01.
  • Severijnen, G. G., Bosker, H. R., & McQueen, J. M. (2022). Acoustic correlates of Dutch lexical stress re-examined: Spectral tilt is not always more reliable than intensity. Talk presented at Speech Prosody 2022. Lisbon, Portugal. 2022-05-23 - 2022-05-26.
  • Severijnen, G., Bosker, H. R., & McQueen, J. M. (2022). How do “VOORnaam” and “voorNAAM” differ between talkers? A corpus analysis of individual talker differences in lexical stress in Dutch. Poster presented at the 18th Conference on Laboratory Phonology (LabPhon 18), online.
  • Takashima, A., Hintz, F., McQueen, J. M., Meyer, A. S., & Hagoort, P. (2022). The neuronal underpinnings of variability in language skills. Talk presented at the 22nd Conference of the European Society for Cognitive Psychology (ESCOP 2022). Lille, France. 2022-08-29 - 2022-09-01.
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2022). Both contextual and talker-bound F0 information affect voiceless fricative perception. Talk presented at De Dag van de Fonetiek. Utrecht, The Netherlands. 2022-12-16.
  • Bujok, R., Bultena, S., McQueen, J. M., & Broersma, M. (2021). Accent adaptation through error-based learning. Talk presented at EDLL 2021 - International Conference on Error-Driven Learning in Language. Tübingen, Germany. 2021-03-10 - 2021-03-12.
  • Hintz, F., Voeten, C. C., McQueen, J. M., & Scharenborg, O. (2021). Effects of masking position on the time course of spoken word comprehension in noise. Talk presented at the 43rd Annual Meeting of the Cognitive Science Society (CogSci 2021). Vienna, Austria. 2021-07-26 - 2021-07-29.
  • Hintz, F., Voeten, C. C., Isakoglou, C., McQueen, J. M., & Meyer, A. S. (2021). Individual differences in language ability: Quantifying the relationships between linguistic experience, general cognitive skills and linguistic processing skills. Talk presented at the 34th Annual CUNY Conference on Human Sentence Processing (CUNY 2021). Philadelphia, USA. 2021-03-04 - 2021-03-06.
  • Severijnen, G., Bosker, H. R., & McQueen, J. M. (2020). The role of talker-specific prosody in predictive speech perception. Poster presented at the 26th Architectures and Mechanisms for Language Processing Conference (AMLaP 2020), Potsdam, Germany.
  • Garg, A., Piai, V., Takashima, A., McQueen, J. M., & Roelofs, A. (2019). Linking production and comprehension – Investigating the lexical interface. Poster presented at the Eleventh Annual Meeting of the Society for the Neurobiology of Language (SNL 2019), Helsinki, Finland.
  • Hintz, F., Jongman, S. R., Dijkhuis, M., Van 't Hoff, V., McQueen, J. M., & Meyer, A. S. (2019). Assessing individual differences in language processing: A novel research tool. Talk presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019). Tenerife, Spain. 2019-09-25 - 2019-09-28.

    Abstract

    Individual differences in language processing are prevalent in our daily lives. However, for decades, psycholinguistic research has largely ignored variation in the normal range of abilities. Recently, scientists have begun to acknowledge the importance of inter-individual variability for a comprehensive characterization of the language system. In spite of this change of attitude, empirical research on individual differences is still sparse, which is in part due to the lack of a suitable research tool. Here, we present a novel battery of behavioral tests for assessing individual differences in language skills in younger adults. The Dutch prototype comprises 29 subtests and assesses many aspects of language knowledge (grammar and vocabulary), linguistic processing skills (word and sentence level) and general cognitive abilities involved in using language (e.g., WM, IQ). Using the battery, researchers can determine performance profiles for individuals and link them to neurobiological or genetic data.
  • Jager, L., Witteman, J., McQueen, J. M., & Schiller, N. O. (2019). Can brain potentials reflect L2 learning potential? Poster presented at the Eleventh Annual Meeting of the Society for the Neurobiology of Language (SNL 2019), Helsinki, Finland.
  • Mickan, A., McQueen, J. M., & Lemhöfer, K. (2019). New in, old out? Does learning a new foreign language make you forget previously learned foreign languages? Talk presented at the third Vocab@ conference. Leuven, Belgium. 2019-07-01 - 2019-07-03.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Hagoort, P., & Eisner, F. (2018). Opposing and following responses in sensorimotor speech control: Why responses go both ways. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.

    Abstract

    When talking, speakers continuously monitor the auditory feedback of their own voice to control and inform speech production processes. When speakers are provided with auditory feedback that is perturbed in real time, most of them compensate for this by opposing the feedback perturbation. For example, when speakers hear themselves at a higher pitch than intended, they compensate by lowering their pitch. However, sometimes speakers follow the perturbation instead (i.e., raising their pitch in response to higher-than-expected pitch). Current theoretical frameworks cannot account for these following responses. In the current study, we performed two experiments to investigate whether the state of the speech production system at perturbation onset may determine which type of response (opposing or following) is given. Participants vocalized while the pitch in their auditory feedback was briefly (500 ms) perturbed in half of the vocalizations. None of the participants were aware of these manipulations. Subsequently, we analyzed the pitch contour of the participants’ vocalizations. The results suggest that whether the response to unexpected feedback opposes or follows the perturbation depends on ongoing fluctuations of the production system: it initially responds by doing the opposite of what it was already doing. In addition, the results show that all speakers produce both following and opposing responses, although the distribution of response types varies across individuals. Both the interaction with ongoing fluctuations and the non-trivial number of following responses suggest that current speech production models are inadequate. More generally, the current study indicates that looking beyond the average response can lead to a more complete view of the nature of feedback processing in motor control. Future work should explore whether the direction of feedback-based control in domains outside of speech production is also conditional on the state of the motor system at the time of the perturbation.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Hagoort, P., & Eisner, F. (2018). Opposing and following responses in sensorimotor speech control: Why responses go both ways. Talk presented at Psycholinguistics in Flanders (PiF 2018). Ghent, Belgium. 2018-06-04 - 2018-06-05.

    Abstract

    When talking, speakers continuously monitor and use the auditory feedback of their own voice to control and inform speech production processes. Auditory feedback processing has been studied using perturbed auditory feedback. When speakers are provided with auditory feedback that is perturbed in real time, most of them compensate for this by opposing the feedback perturbation. For example, when speakers hear themselves at a higher pitch than intended, they would compensate by lowering their pitch. However, sometimes speakers follow the perturbation instead (i.e., raising their pitch in response to higher-than-expected pitch). Although most past studies observe some following responses, current theoretical frameworks cannot account for following responses. In addition, recent experimental work has suggested that following responses may be more common than has been assumed to date.
    In the current study, we performed two experiments (N = 39 and N = 24) to investigate whether the state of the speech production system at perturbation onset may determine what type of response (opposing or following) is given. Participants vocalized while they tried to match a target pitch level. Meanwhile, the pitch in their auditory feedback was briefly (500 ms) perturbed in half of the vocalizations, increasing or decreasing pitch by 25 cents. None of the participants were aware of these manipulations. Subsequently, we analyzed the pitch contour of the participants’ vocalizations.
    The results suggest that whether the response to unexpected feedback opposes or follows the perturbation depends on ongoing fluctuations of the production system: it initially responds by doing the opposite of what it was already doing. In addition, the results show that all speakers produce both following and opposing responses, although the distribution of response types varies across individuals.
    Both the interaction with ongoing fluctuations of the speech system and the non-trivial proportion of following responses suggest that current production models are inadequate: they need to account for why responses to unexpected sensory feedback depend on the production system’s state at the time of perturbation. More generally, the current study indicates that looking beyond the average response can lead to a more complete view of the nature of feedback processing in motor control. Future work should explore whether the direction of feedback-based control in domains outside of speech production is also conditional on the state of the motor system at the time of the perturbation.
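
    A hedged sketch of how such responses could be classified (illustrative only; the authors' analysis pipeline is not described here): express pitch in cents relative to a reference and compare produced pitch before versus during the perturbation. A shift against the direction of the feedback manipulation counts as opposing, a shift with it as following.

    ```python
    import numpy as np

    def cents(f_hz, ref_hz):
        """Frequency in cents relative to a reference (100 cents = 1 semitone)."""
        return 1200.0 * np.log2(np.asarray(f_hz, dtype=float) / ref_hz)

    def classify_response(pre_f0, during_f0, shift_cents, ref_hz=200.0):
        """Opposing = produced pitch moves against the feedback shift; following = with it."""
        delta = cents(during_f0, ref_hz).mean() - cents(pre_f0, ref_hz).mean()
        if delta == 0.0:
            return "none"
        return "opposing" if np.sign(delta) == -np.sign(shift_cents) else "following"

    pre = [200.0, 200.5, 199.8]      # hypothetical f0 samples before the shift (Hz)
    during = [198.9, 198.5, 198.7]   # hypothetical samples during a +25-cent shift
    print(classify_response(pre, during, shift_cents=+25))  # -> "opposing"
    ```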
  • Goriot, C., Broersma, M., Van Hout, R., Unsworth, S., & McQueen, J. M. (2018). Are Dutch children able to distinguish between English phonetic contrasts? A comparison between monolingual children, early-English pupils, and bilinguals. Poster presented at the 2nd International Symposium on Bilingual and L2 Processing in Adults and Children, Braunschweig, Germany.
  • Hintz, F., Jongman, S. R., McQueen, J. M., & Meyer, A. S. (2018). Individual differences in word production: Evidence from students with diverse educational backgrounds. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
  • Hintz, F., Jongman, S. R., Dijkhuis, M., Van 't Hoff, V., Damian, M., Schröder, S., Brysbaert, M., McQueen, J. M., & Meyer, A. S. (2018). STAIRS4WORDS: A new adaptive test for assessing receptive vocabulary size in English, Dutch, and German. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2018), Berlin, Germany.
  • Hintz, F., Jongman, S. R., McQueen, J. M., & Meyer, A. S. (2018). Verbal and non-verbal predictors of word comprehension and word production. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2018), Berlin, Germany.
  • Mickan, A., McQueen, J. M., Piai, V., & Lemhöfer, K. (2018). Neural correlates of between-language competition in foreign language attrition. Poster presented at the Tenth Annual Meeting of the Society for the Neurobiology of Language (SNL 2018), Quebec, Canada.
  • Mickan, A., Lemhöfer, K., & McQueen, J. M. (2018). The role of between-language competition in foreign language attrition. Poster presented at the 28th Conference of the European Second Language Association (EuroSLA 28), Münster, Germany.
  • Mickan, A., Lemhöfer, K., & McQueen, J. M. (2018). The role of between-language competition in foreign language attrition. Poster presented at the International Meeting of the Psychonomics Society 2018, Amsterdam, The Netherlands.
  • Dai, B., Kösem, A., McQueen, J. M., Jensen, O., & Hagoort, P. (2017). Linguistic information of distracting speech modulates neural entrainment to target speech. Poster presented at the 47th Annual Meeting of the Society for Neuroscience (SfN), Washington, DC, USA.
  • Dai, B., Kösem, A., McQueen, J. M., Jensen, O., & Hagoort, P. (2017). Linguistic information of distracting speech modulates neural entrainment to target speech. Poster presented at the 13th International Conference for Cognitive Neuroscience (ICON), Amsterdam, The Netherlands.
  • Franken, M. K., Eisner, F., Schoffelen, J.-M., Acheson, D. J., Hagoort, P., & McQueen, J. M. (2017). Audiovisual recalibration of vowel categories. Talk presented at Psycholinguistics in Flanders (PiF 2017). Leuven, Belgium. 2017-05-29 - 2017-05-30.

    Abstract

    One of the most daunting tasks of a listener is to map a continuous auditory stream onto known speech sound categories and lexical items. A major issue with this mapping problem is the variability in the acoustic realizations of sound categories, both within and across speakers. Past research has suggested that listeners use various sources of information, such as lexical knowledge or visual cues (e.g., lip-reading), to recalibrate these speech categories to the current speaker. Previous studies have focused on audiovisual recalibration of consonant categories. The present study explores whether vowel categorization, which is known to show less sharply defined category boundaries, also benefits from visual cues.
    Participants were exposed to videos of a speaker pronouncing one out of two vowels (Dutch vowels /e/ and /ø/), paired with audio that was ambiguous between the two vowels. The most ambiguous vowel token was determined on an individual basis by a categorization task at the beginning of the experiment. In one group of participants, this auditory token was paired with a video of an /e/ articulation, in the other group with an /ø/ video. After exposure to these videos, it was found in an audio-only categorization task that participants had adapted their categorization behavior as a function of the video exposure. The group that was exposed to /e/ videos showed a reduction of /ø/ classifications, suggesting they had recalibrated their vowel categories based on the available visual information. These results show that listeners indeed use visual information to recalibrate vowel categories, which is in line with previous work on audiovisual recalibration in consonant categories, and lexically-guided recalibration in both vowels and consonants.
    In addition, a secondary aim of the current study was to explore individual variability in audiovisual recalibration. Phoneme categories vary not only in terms of boundary location, but also in terms of boundary sharpness, that is, how strictly categories are distinguished. The present study explores whether this sharpness is associated with the amount of audiovisual recalibration. The results tentatively suggest that a fuzzier boundary is associated with stronger recalibration, indicating that listeners’ category sharpness may be related to the weight they assign to visual information in audiovisual speech perception. If listeners with fuzzy boundaries assign more weight to visual cues, then, given that vowel categories have less sharp boundaries than consonant categories, there ought to be audiovisual recalibration for vowels as well. This is exactly what was found in the current study.
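
    One common way to implement the boundary and sharpness measures mentioned above is to fit a logistic psychometric function to the categorization responses: the boundary is the 50% point and sharpness is the slope. The sketch below is illustrative only, with made-up response proportions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, boundary, slope):
        """Psychometric function: probability of one response category."""
        return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

    steps = np.arange(1, 10)  # hypothetical 9-step /e/-/ø/ continuum
    p_oe = np.array([.02, .05, .12, .30, .52, .71, .88, .95, .98])  # made-up data

    (boundary, slope), _ = curve_fit(logistic, steps, p_oe, p0=[5.0, 1.0])
    print(f"most ambiguous token ~ step {boundary:.2f}; boundary sharpness = {slope:.2f}")
    ```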
  • Goriot, C., Van Hout, R., Broersma, M., Unsworth, S., & McQueen, J. M. (2017). Executive functioning in early bilinguals, second language learners and monolinguals: Does language balance play a role? Talk presented at the 5th Barcelona Summer School on Bilingualism and Multilingualism. Barcelona, Spain. 2017-09-12 - 2017-09-15.
  • Goriot, C., Broersma, M., Van Hout, R., McQueen, J. M., & Unsworth, S. (2017). Perception of English speech sounds among Dutch primary-school pupils: A comparison between early-English and control school pupils. Talk presented at the Conference on Multilingualism (COM 2017). Groningen, The Netherlands. 2017-11-06 - 2017-11-08.
  • Krutwig, J., Sadakata, M., Garcia-Cossio, E., Desain, P., & McQueen, J. M. (2017). Perception and production interactions in non-native speech category learning: Between neural and behavioural signatures. Talk presented at Psycholinguistics in Flanders (PiF 2017). Leuven, Belgium. 2017-05-29 - 2017-05-30.
  • Mickan, A., Lemhöfer, K., & McQueen, J. M. (2017). Is foreign language attrition a special case of retrieval-induced forgetting? Poster presented at the 59th Conference of Experimental Psychologists (TeaP 2017), Dresden, Germany.
  • Dai, B., Kösem, A., McQueen, J. M., & Hagoort, P. (2016). Pure linguistic interference during comprehension of competing speech signals. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    In certain situations, human listeners have more difficulty understanding speech in a multi-talker environment than in the presence of non-intelligible noise. The costs of speech-in-speech masking have been attributed to informational masking, i.e., to the competing processing of information from the target and the distractor speech. It remains unclear what kind of information is competing, as intelligible speech and unintelligible speech-like signals (e.g., reversed, noise-vocoded, and foreign speech) differ both in linguistic content and in acoustic information. Thus, intelligible speech could be a stronger distractor than unintelligible speech because its acoustics are closer to those of the target speech, or because it carries competing linguistic information. In this study, we aimed to isolate the linguistic component of speech-in-speech masking and to test its influence on the comprehension of target speech. To do so, 24 participants performed a dichotic listening task in which the interfering stimuli consisted of 4-band noise-vocoded sentences that could become intelligible through training. The experiment included three steps: first, the participants were instructed to report the clear target speech from a mixture of one clear speech channel and one unintelligible noise-vocoded speech channel; second, they were trained on the interfering noise-vocoded sentences so that these became intelligible; third, they performed the dichotic listening task again. Crucially, before and after training, the distractor speech had the same acoustic features but not the same linguistic information. We thus predicted that the distracting noise-vocoded signal would interfere more with target speech comprehension after training than before training. To control for practice/fatigue effects, we used additional 2-band noise-vocoded sentences, which participants were not trained on, as interfering signals in the dichotic listening tasks. We expected that performance on these trials would not change after training, or would change less than performance on trials with trained 4-band noise-vocoded sentences. Performance was measured under three SNR conditions: 0, -3, and -6 dB. The behavioral results are consistent with our predictions. The 4-band noise-vocoded signal interfered more with the comprehension of target speech after training (i.e., when it was intelligible) than before training (i.e., when it was unintelligible), but only at -3 dB SNR. Crucially, comprehension of the target speech did not change after training when the interfering signals consisted of unintelligible 2-band noise-vocoded speech, ruling out a fatigue effect. In line with previous studies, the present results show that intelligible distractors interfere more with the processing of target speech. These findings further suggest that speech-in-speech interference originates, to a certain extent, from the parallel processing of competing linguistic content. A magnetoencephalography study with the same design is currently being performed to investigate the neural origins of informational masking.
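
    Noise-vocoding, the degradation used for the distractor sentences, replaces the carrier in each frequency band with amplitude-modulated noise. A minimal 4-band sketch follows (band edges, filter order, and the demo signal are assumptions, not the stimulus specifications used in the study).

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def vocode(signal, fs, n_bands=4, f_lo=100.0, f_hi=7000.0):
        """Sum of noise bands, each modulated by the speech band's envelope."""
        edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
        noise = np.random.default_rng(0).standard_normal(len(signal))
        out = np.zeros_like(signal)
        for lo, hi in zip(edges[:-1], edges[1:]):
            b, a = butter(4, [lo, hi], btype="band", fs=fs)
            band = filtfilt(b, a, signal)              # band-limited speech
            envelope = np.abs(hilbert(band))           # amplitude envelope
            out += filtfilt(b, a, noise) * envelope    # envelope-modulated noise
        return out

    fs = 16000
    t = np.arange(fs) / fs
    demo = np.sin(2 * np.pi * 220 * t)  # stand-in for a speech recording
    vocoded = vocode(demo, fs)
    ```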
  • Dai, B., Kösem, A., McQueen, J. M., & Hagoort, P. (2016). Pure linguistic interference during comprehension of competing speech signals. Poster presented at the 8th Speech in Noise Workshop (SpiN), Groningen, The Netherlands.
  • Franken, M. K., Schoffelen, J.-M., McQueen, J. M., Acheson, D. J., Hagoort, P., & Eisner, F. (2016). Neural correlates of auditory feedback processing during speech production. Poster presented at New Sounds 2016: 8th International Conference on Second-Language Speech, Aarhus, Denmark.

    Abstract

    An important aspect of L2 speech learning is the interaction between speech production and perception. One way to study this interaction is to provide speakers with altered auditory feedback and to investigate how unexpected auditory feedback affects subsequent speech production. Although it is generally well established that speakers on average compensate for auditory feedback perturbations, even when unaware of the manipulation, the neural correlates of responses to perturbed auditory feedback are not well understood. In the present study, we provided speakers with auditory feedback that was intermittently pitch-shifted, while we measured their neural activity using magnetoencephalography (MEG). Participants were instructed to vocalize the Dutch vowel /e/ while trying to match the pitch of a short tone. During vocalization, participants received auditory feedback through headphones. In half of the trials, the pitch in the feedback signal was shifted by -25 cents, starting at a jittered delay after speech onset and lasting for 500 ms. Trials with perturbed feedback and control trials (with normal feedback) were presented in random order. Post-experiment questionnaires showed that none of the participants was aware of the pitch manipulation. Behaviorally, the results show that participants on average compensated for the auditory feedback by shifting the pitch of their speech in the opposite (upward) direction. This suggests that even though participants were not aware of the pitch shift, they automatically compensated for the unexpected feedback signal. The MEG results show a right-lateralized response to both the onset and the offset of the pitch perturbation during speaking. We suggest this response reflects detection of the mismatch between the predicted and perceived feedback signals, which could subsequently drive behavioral adjustments. These results are in line with recent models of speech motor control and provide further insights into the neural correlates of speech production and speech feedback processing.
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2016). Neural mechanisms underlying auditory feedback processing during speech production. Talk presented at the Donders Discussions 2016. Nijmegen, The Netherlands. 2016-11-23 - 2016-11-24.

    Abstract

    Speech production is one of the most complex motor skills, involving close interaction between perceptual and motor systems. One way to investigate this interaction is to provide speakers with manipulated auditory feedback during speech production. Using this paradigm, investigators have started to identify a neural network that underlies auditory feedback processing and monitoring during speech production. However, to date, little is known about the neural mechanisms that underlie feedback processing. The present study set out to shed more light on the neural correlates of processing auditory feedback. Participants (N = 39) were seated in an MEG scanner and were asked to vocalize the vowel /e/ continuously throughout each trial (of 4 s) while trying to match a pre-specified pitch target of 4, 8 or 11 semitones above their baseline pitch level. They received auditory feedback through ear plugs. In half of the trials, the pitch in the auditory feedback was unexpectedly manipulated (raised by 25 cents) for 500 ms, starting between 500 ms and 1500 ms after speech onset. In the other trials, feedback was normal throughout the trial. In a second block of trials, participants listened passively to recordings of the auditory feedback they received during vocalization in the first block. Even though none of the participants reported being aware of any feedback perturbations, behavioral responses showed that participants on average compensated for the feedback perturbation by decreasing the pitch in their vocalizations, starting at about 100 ms after perturbation onset and lasting until about 100 ms after perturbation offset. MEG data were analyzed, time-locked to the onset of the feedback perturbation in the perturbation trials, and to matched time points in the control trials. A cluster-based permutation test showed that the event-related field (ERF) responses differed between the perturbation and the control condition. This difference was mainly driven by an ERF response peaking at about 100 ms after perturbation onset and a larger response after perturbation offset. Both responses were localized to sensorimotor cortices, with the effect being larger in the right hemisphere. These results are in line with previous reports of right-lateralized pitch processing. In the passive listening condition, we found no differences between the perturbation and the control trials. This suggests that the ERF responses were not merely driven by the pitch change in the auditory input and hence instead reflect speech production processes. We suggest that the observed ERF responses in sensorimotor cortex index the mismatch between the self-generated forward-model prediction of auditory input and the incoming auditory signal.
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2016). Neural mechanisms underlying auditory feedback processing during speech production. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Speech production is one of the most complex motor skills, involving close interaction between perceptual and motor systems. One way to investigate this interaction is to provide speakers with manipulated auditory feedback during speech production. Using this paradigm, investigators have started to identify a neural network that underlies auditory feedback processing and monitoring during speech production. However, to date, little is known about the neural mechanisms that underlie feedback processing. The present study set out to shed more light on the neural correlates of processing auditory feedback. Participants (N = 39) were seated in an MEG scanner and were asked to vocalize the vowel /e/ continuously throughout each trial (of 4 s) while trying to match a pre-specified pitch target of 4, 8 or 11 semitones above their baseline pitch level. They received auditory feedback through ear plugs. In half of the trials, the pitch in the auditory feedback was unexpectedly manipulated (raised by 25 cents) for 500 ms, starting between 500 ms and 1500 ms after speech onset. In the other trials, feedback was normal throughout the trial. In a second block of trials, participants listened passively to recordings of the auditory feedback they received during vocalization in the first block. Even though none of the participants reported being aware of any feedback perturbations, behavioral responses showed that participants on average compensated for the feedback perturbation by decreasing the pitch in their vocalizations, starting at about 100 ms after perturbation onset and lasting until about 100 ms after perturbation offset. MEG data were analyzed, time-locked to the onset of the feedback perturbation in the perturbation trials, and to matched time points in the control trials. A cluster-based permutation test showed that the event-related field (ERF) responses differed between the perturbation and the control condition. This difference was mainly driven by an ERF response peaking at about 100 ms after perturbation onset and a larger response after perturbation offset. Both responses were localized to sensorimotor cortices, with the effect being larger in the right hemisphere. These results are in line with previous reports of right-lateralized pitch processing. In the passive listening condition, we found no differences between the perturbation and the control trials. This suggests that the ERF responses were not merely driven by the pitch change in the auditory input and hence instead reflect speech production processes. We suggest that the observed ERF responses in sensorimotor cortex index the mismatch between the self-generated forward-model prediction of auditory input and the incoming auditory signal.
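
    The statistical approach named in these abstracts, a cluster-based permutation test, can be sketched in a few lines (illustrative only; MEG analyses are normally run with dedicated toolboxes such as FieldTrip or MNE-Python): threshold a t-statistic time course, take the largest supra-threshold cluster mass, and compare it against a null distribution built by randomly sign-flipping the paired condition differences.

    ```python
    import numpy as np
    from scipy import stats

    def max_cluster_mass(t_vals, threshold):
        """Sum of |t| within the largest contiguous supra-threshold cluster."""
        best, run = 0.0, 0.0
        for t in np.abs(t_vals):
            run = run + t if t > threshold else 0.0
            best = max(best, run)
        return best

    def cluster_perm_test(cond_a, cond_b, n_perm=1000, threshold=2.0, seed=0):
        """Paired design: cond_a and cond_b are (trials x time) arrays."""
        rng = np.random.default_rng(seed)
        diffs = cond_a - cond_b
        t_obs, _ = stats.ttest_1samp(diffs, 0.0, axis=0)
        obs = max_cluster_mass(t_obs, threshold)
        null = np.empty(n_perm)
        for i in range(n_perm):
            signs = rng.choice([-1.0, 1.0], size=(diffs.shape[0], 1))
            t_perm, _ = stats.ttest_1samp(diffs * signs, 0.0, axis=0)
            null[i] = max_cluster_mass(t_perm, threshold)
        return (null >= obs).mean()  # permutation p-value

    rng = np.random.default_rng(1)
    a = rng.standard_normal((30, 200)) + np.r_[np.zeros(100), 0.8 * np.ones(100)]
    b = rng.standard_normal((30, 200))
    print(cluster_perm_test(a, b))  # small p: effect in the second half
    ```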
  • Goriot, C., Broersma, M., Van Hout, R., McQueen, J. M., & Unsworth, S. (2017). De relatie tussen vroeg vreemdetalenonderwijs en de ontwikkeling van het fonologisch bewustzijn [The relationship between early foreign-language education and the development of phonological awareness]. Talk presented at the Grote Taaldag 2017. Utrecht, The Netherlands. 2017-02-04.
  • Goriot, C., Broersma, M., McQueen, J. M., Unsworth, S., & Van Hout, R. (2016). L1-effecten in een Engelse woordenschattaak: De PPVT-4 [L1 effects in an English vocabulary task: The PPVT-4]. Talk presented at the Grote Taaldag. Utrecht, The Netherlands. 2016-02-06.
  • Goriot, C., Van Hout, R., Broersma, M., McQueen, J. M., & Unsworth, S. (2016). Is there an effect of early-English education on the development of pupils’ executive functions? Talk presented at Psycholinguistics in Flanders (PiF 2016). Antwerp, Belgium. 2016-05-26 - 2016-05-27.
  • Goriot, C., Van Hout, R., Broersma, M., McQueen, J. M., & Unsworth, S. (2016). The relationship between early-English education and executive functions: Balance is key. Talk presented at the Conference on Multilingualism (COM 2016). Ghent, Belgium. 2016-09-11 - 2016-09-13.
  • Goriot, C., Van Hout, R., Broersma, M., Unsworth, S., & McQueen, J. M. (2016). The influence of cognates on Dutch pupils’ English vocabulary scores in the Peabody Picture Vocabulary Test. Talk presented at EuroSLA 26. Jyväskylä, Finland. 2016-08-24 - 2016-08-27.
  • Hintz, F., McQueen, J. M., & Scharenborg, O. (2016). Effects of frequency and neighborhood density on spoken-word recognition in noise: Evidence from perceptual identification in Dutch. Talk presented at the 22nd Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2016). Bilbao, Spain. 2016-09-01 - 2016-09-03.
  • Krutwig, J., Sadakata, M., Garcia-Cossio, E., Desain, P., & McQueen, J. M. (2016). Perception and production interactions in non-native speech category learning. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Reaching a native-like level in a second language includes mastering phoneme contrasts that are not distinguished in one’s mother tongue – both in perception and production. This study explores how those two domains interact in the course of learning and how behavioural changes in both listening and speaking ability are related to traceable changes in the brain. Unravelling the processes underlying speech category learning could guide the design of more efficient training methods. Production and perception processes could support each other during learning, or they could interfere with each other. Baese-Berk et al. (2010), for instance, observed delayed learning when perceptual training was combined with production practice compared to perception-only training. These results could indicate perception-production interference, but could also be explained by differences in cognitive load between the two conditions. In order to disentangle the added value of production training in perceptual category learning, we systematically contrasted the combination of perceptual training with either related or unrelated production. Thirty-one native speakers of Dutch, distributed between two groups, participated in a 4-day high-variability training protocol on the British-English /æ/-/ɛ/ vowel contrast (multiple words spoken by multiple talkers). In the related production group (n=15), feedback on a perceptual categorisation task was combined with pronouncing the respective correct word on every trial, whereas in the unrelated production group (n=16) it was combined with pronouncing a matched but phonologically unrelated set of words. Cognitive load was matched between groups. Pre- and post-training measurements were taken of both perceptual abilities (an identification task, an identification task assessing category boundaries on a morphed continuum, and a discrimination task on the same continuum) and production ability (a reading-aloud task with a list of isolated words). All auditory stimulus words during the training were presented according to a classical oddball paradigm, while electrophysiological activity was recorded continuously. This enabled us to track neural changes in auditory discrimination ability using the mismatch negativity response (MMN). Results indicate that participants’ perceptual ability significantly improved over the course of training, with no significant difference in perceptual learning between the two groups. Measurements of the distribution of the formants F1 and F2 in the words in the production task before and after training (quantified in terms of Mahalanobis distance) showed that participants in both groups significantly improved after training: the two English target vowels became acoustically more distinct. Analyses of the electrophysiological data and of the other behavioural tasks are ongoing and will be presented. The fact that participants’ perceptual ability improved similarly regardless of whether they also practiced the respective productions could be seen as evidence that the perception and production systems for non-native vowels are separate. A more likely explanation, however, is that the added value of practicing the pronunciation of the vowels might have been counteracted, especially early in training, by exposure to sub-optimal utterances as the participants listened to their own voices. In order for production practice to be beneficial for the learner, immediate and informative feedback on production outcomes might be necessary.
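
    The production measure described above, Mahalanobis distance between vowel categories in formant space, can be illustrated as follows (hypothetical formant values, not the study's data): compute each /æ/ token's distance from the /ɛ/ distribution using the inverse covariance of that distribution.

    ```python
    import numpy as np
    from scipy.spatial.distance import mahalanobis

    rng = np.random.default_rng(0)
    # Hypothetical (F1, F2) tokens in Hz for two English vowel categories.
    ae = rng.multivariate_normal([750, 1750], [[90**2, 0], [0, 180**2]], 50)
    eh = rng.multivariate_normal([580, 1900], [[80**2, 0], [0, 160**2]], 50)

    vi = np.linalg.inv(np.cov(eh, rowvar=False))  # inverse covariance of /eh/
    d = np.mean([mahalanobis(tok, eh.mean(axis=0), vi) for tok in ae])
    print(f"mean Mahalanobis distance /ae/ -> /eh/: {d:.2f}")
    ```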
  • McQueen, J. M., & Meyer, A. S. (2016). Cognitive architectures [Session Chair]. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.
  • Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Assessing speech production-perception interactions through individual differences. Talk presented at Psycholinguistics in Flanders (PiF 2015). Marche-en-Famenne, Belgium. 2015-05-21 - 2015-05-22.

    Abstract

    This study aims to test recent theoretical frameworks of speech motor control which claim that speech production targets are specified in auditory terms. According to such frameworks, people with better auditory acuity should have more precise speech targets. Participants performed speech perception and production tasks in a counterbalanced order. Speech perception acuity was assessed using an adaptive speech discrimination task, in which participants discriminated between stimuli on a /ɪ/-/ɛ/ and a /ɑ/-/ɔ/ continuum. To assess variability in speech production, participants performed a pseudo-word reading task; formant values were measured for each recording of the vowels /ɪ/, /ɛ/, /ɑ/ and /ɔ/ in 288 pseudowords (18 per vowel, each of which was repeated 4 times). We predicted that speech production variability would correlate inversely with discrimination performance. Results confirmed this prediction: better discriminators had more distinctive vowel production targets. In addition, participants with higher auditory acuity produced vowels with less within-phoneme variability that were spaced farther apart in vowel space. This study highlights the importance of individual differences in the study of speech motor control, and sheds light on speech production-perception interactions.
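
    Adaptive discrimination tasks of this kind are often implemented as a staircase. The sketch below shows a generic 2-down-1-up staircase converging near the ~70.7% correct point; this particular procedure is an assumption for illustration, as the abstract does not specify the adaptive rule used.

    ```python
    import numpy as np

    def staircase(respond, start=10.0, step=1.0, n_reversals=8):
        """2-down-1-up staircase; tracks the ~70.7% correct difference level."""
        level, streak, direction = start, 0, -1
        reversals = []
        while len(reversals) < n_reversals:
            streak = streak + 1 if respond(level) else 0
            if streak == 2:            # two correct in a row -> make task harder
                new_dir, streak = -1, 0
            elif streak == 0:          # an error -> make task easier
                new_dir = +1
            else:                      # one correct so far -> no change yet
                continue
            if new_dir != direction:   # direction change counts as a reversal
                reversals.append(level)
            direction = new_dir
            level = max(level + new_dir * step, step)
        return float(np.mean(reversals[2:]))  # discard the first two reversals

    rng = np.random.default_rng(0)
    # Hypothetical listener: larger acoustic differences are easier to hear.
    respond = lambda lvl: rng.random() < min(0.95, 0.55 + 0.05 * lvl)
    print(f"estimated threshold: {staircase(respond):.1f} (arbitrary units)")
    ```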
  • Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Assessing the link between speech perception and production through individual differences. Poster presented at the 18th International Congress of the Phonetic Sciences (ICPhS 2015), Glasgow, UK.

    Abstract

    This study aims to test a prediction of recent theoretical frameworks in speech motor control: if speech production targets are specified in auditory terms, people with better auditory acuity should have more precise speech targets. To investigate this, we had participants perform speech perception and production tasks in a counterbalanced order. To assess speech perception acuity, we used an adaptive speech discrimination task. To assess variability in speech production, participants performed a pseudo-word reading task; formant values were measured for each recording. We predicted that speech production variability would correlate inversely with discrimination performance. The results suggest that people do vary in their production and perceptual abilities, and that better discriminators have more distinctive vowel production targets, confirming our prediction. This study highlights the importance of individual differences in the study of speech motor control, and sheds light on speech production-perception interaction.
  • Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Effects of auditory feedback consistency on vowel production. Poster presented at Psycholinguistics in Flanders (PiF 2015), Marche-en-Famenne, Belgium.

    Abstract

    In investigations of feedback control during speech production, researchers have focused on two different kinds of responses to erroneous or unexpected auditory feedback. Compensation refers to online, feedback-based corrections of articulations. In contrast, adaptation refers to long-term changes in the speech production system after exposure to erroneous/unexpected feedback, which may last even after feedback is normal again. In the current study, we aimed to compare both types of feedback responses by investigating the conditions under which the system starts adapting in addition to merely compensating. Participants vocalized long vowels while they were exposed to either consistently altered auditory feedback, or to feedback that was unpredictably either altered or normal. Participants were not aware of the manipulation of auditory feedback. We predicted that both conditions would elicit compensation, whereas adaptation would be stronger when the altered feedback was consistent across trials. The results show that although there seems to be somewhat more adaptation for the consistently altered feedback condition, a substantial amount of individual variability led to statistically unreliable effects at the group level. The results stress the importance of taking into account individual differences and show that people vary widely in how they respond to altered auditory feedback.
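
    The two measures contrasted in this abstract can be made concrete with a toy computation (all numbers hypothetical): compensation is the within-trial pitch change while feedback is altered, whereas adaptation is the baseline shift that persists in later unperturbed trials.

    ```python
    # Mean f0 (Hz) per trial phase; values are made up for illustration.
    trial_f0 = {
        "perturbed_start": 200.0, "perturbed_end": 196.5,  # within an altered trial
        "baseline_pre": 200.0, "baseline_post": 198.2,     # unperturbed trials
    }

    compensation = trial_f0["perturbed_end"] - trial_f0["perturbed_start"]
    adaptation = trial_f0["baseline_post"] - trial_f0["baseline_pre"]
    print(f"compensation: {compensation:+.1f} Hz (online, within trial)")
    print(f"adaptation:   {adaptation:+.1f} Hz (persists after feedback is normal)")
    ```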
  • Franken, M. K., Eisner, F., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2015). Following and opposing responses to perturbed auditory feedback. Poster presented at the Seventh Annual Meeting of the Society for the Neurobiology of Language (SNL 2015), Chicago, IL, USA.
  • Goriot, C., Broersma, M., Unsworth, S., Van Hout, R., & McQueen, J. M. (2015). Does early foreign language education influence pupils’ cognitive development? Poster presented at the LOT summer school 2015, Leuven, Belgium.
  • Takashima, A., Bakker, I., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2015). Brain areas involved in acquisition and consolidation of novel words with/without concepts across different age groups. Talk presented at the 22nd Annual Meeting of the Society for the Scientific Study of Reading. Hawaii, USA. 2015-07-15 - 2015-07-18.
  • Takashima, A., Bakker, I., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2015). Consolidation of novel word representation in young adults and children. Talk presented at the Magic Moments Workshop. Nijmegen, the Netherlands. 2015-03-10.
  • Viebahn, M., Buerki, A., McQueen, J. M., Ernestus, M., & Frauenfelder, U. (2015). Learning multiple pronunciation variants of French novel words with orthographic forms. Poster presented at the Memory Consolidation and Word Learning Workshop, Nijmegen, The Netherlands.
  • Bakker, I., Takashima, A., van Hell, J., Janzen, G., & McQueen, J. M. (2014). Brain activation during novel word encoding predicts lexical integration. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.

    Abstract

    Acquisition of a novel word involves the integration of a newly formed representation into the mental lexicon, a process which is thought to benefit from offline consolidation. Brain activity during post-learning sleep has been shown to relate to behavioural measures of lexicalisation (Tamminen et al., 2010; 2013), suggesting that the outcome of acquisition is indeed at least partly determined after encoding. It is however unknown to what degree the neural response during the learning phase itself influences successful lexicalisation. A consistent body of evidence indicates that activation in medial temporal, parietal and frontal areas during encoding predicts subsequent memory strength (Kim, 2011), suggesting that encoding-related factors may also affect offline integration processes. In the present study we combined and extended these two lines of research and asked whether encoding-related neural activity is related to subsequent lexical integration as well as explicit memory. Specifically, we hypothesised that immediate orthographic and semantic integration during the first few encounters with novel words predicts their later ability to interact with existing words. Participants studied 40 novel printed words, each paired with a picture of a common object illustrating its meaning, while their neural responses were measured using functional magnetic resonance imaging. A primed visual lexical decision task was administered approximately 24 hours after encoding. In this task, participants made lexical decisions to existing and pseudo-word targets, which were each preceded by a briefly presented novel word that was either semantically related or unrelated to the target. Faster response times to related versus unrelated pairs suggest that links have been formed between the novel-word representations and their semantic associates. Priming effects can therefore be considered a strong indication that novel words have been lexically integrated. Following the priming task, cued and free recall tasks probed explicit memory for the learned novel words. A significant priming effect was observed, suggesting that those novel words that had been encoded successfully were sufficiently lexicalised to influence recognition of their existing semantic associates. In line with previous findings, words that were correctly recalled in the test session elicited enhanced activation in the left inferior frontal gyrus (IFG) during encoding. Similarly, words that subsequently produced priming effects showed enhanced IFG activation compared to words that had no facilitating effect. Crucially, a set of additional clusters predicted subsequent priming but not memory persistence. These were found in left temporo-parietal regions involved in semantic processing, as well as in a posterior portion of the left fusiform gyrus known as the visual word form area (VWFA). These data suggest that increased orthographic and semantic processing during encoding facilitates lexicalisation. We argue that enhanced VWFA activation during encoding reflects the formation and integration of a stable orthographic representation. This enables rapid lexical access to the novel word, which in turn facilitates retrieval of related words and hence boosts their recognition. In conclusion, successful lexicalisation is determined in part by the engagement of encoding mechanisms that stimulate memory integration, above and beyond those supporting memory formation.
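
    The lexicalisation index used here, semantic priming, reduces to a simple contrast of response times (hypothetical values below): a positive unrelated-minus-related difference indicates that the novel-word primes facilitated recognition of their associates.

    ```python
    import numpy as np

    rt_related = np.array([612, 598, 634, 605])    # ms; related prime-target pairs
    rt_unrelated = np.array([641, 629, 655, 638])  # ms; unrelated pairs

    priming_effect = rt_unrelated.mean() - rt_related.mean()
    print(f"priming effect: {priming_effect:.1f} ms")  # positive = facilitation
    ```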
  • Franken, M. K., McQueen, J. M., Hagoort, P., & Acheson, D. J. (2014). Assessing the link between speech perception and production through individual differences. Poster presented at the 6th Annual Meeting of the Society for the Neurobiology of Language, Amsterdam.

    Abstract

    This study aims to test a prediction of recent theoretical frameworks in speech motor control: if speech production targets are specified in auditory terms, people with better auditory acuity should have more precise speech targets. To investigate this, we had participants perform speech perception and production tasks in a counterbalanced order. To assess speech perception acuity, we used an adaptive speech discrimination task. To assess variability in speech production, participants performed a pseudo-word reading task; formant values were measured for each recording. We predicted that speech production variability would correlate inversely with discrimination performance. The results suggest that people do vary in their production and perceptual abilities, and that better discriminators have more distinctive vowel production targets, confirming our prediction. This study highlights the importance of individual differences in the study of speech motor control, and sheds light on speech production-perception interaction.
  • Schuerman, W. L., Meyer, A. S., & McQueen, J. M. (2014). Listeners recognize others’ speech better than their own. Poster presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014), Edinburgh, UK.
  • Takashima, A., Bakker, I., van Hell, J. G., Janzen, G., & McQueen, J. M. (2014). Consolidation of newly learned words with or without meanings: fMRI study on young adults. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.

    Abstract

    Declarative memory is considered to entail episodic memory (memory for episodes that are confined to specific spatial and temporal contexts) and semantic memory (memory for generic knowledge or concepts). Although these two types of memory are not independent and interact extensively, they seem to involve different brain structures at retrieval, with the hippocampus often regarded as important for retrieving arbitrary associative information encoded in a specific episodic context, whereas widely distributed neocortical areas, especially higher-order associative areas, seem to be important in retrieving semantic or conceptual information. In this word-learning study, we asked if there is more involvement of the episodic memory network when retrieval occurs directly after learning, and if there is a shift towards more involvement of the semantic network as the word becomes more de-contextualized with time. Furthermore, we were interested to see the effect of having extra information at encoding, namely, visual information (a picture depicting the word or a definition describing the word) associated with the phonological form of the novel word. Two groups of participants (picture group n=24; definition group n=24) learned phonological novel word forms with meanings (a picture or a definition) or without corresponding meanings (form-only). Participants’ memory for the words was tested in an fMRI scanner directly after training (recent), and again a week later (remote). To test whether novel words were integrated into the lexicon, pause detection and cued recall of meaning association tests were administered behaviourally. Retrieval success was greater for meaningful words than for form-only words on both recent and remote tests, with the difference becoming larger at the remote test. There was evidence of lexicalization (as measured with the pause detection task) for the meaningful words. In cued recall, although participants were quicker to choose the associated meanings if they were presented in the trained form (identical picture/definition), there was less slowing down over time for concept associations (similar picture/definition). Imaging results revealed that hippocampal involvement decreased for form-only words in the picture group, whereas for the meaningful words hippocampal involvement was maintained at the remote test. Differences between meaningful and form-only words in the remote session were found in a wide range of neocortical areas for successful recognition of the trained words, including the fusiform gyrus, medial prefrontal cortex, precuneus and left angular/supramarginal gyrus. Episodic memory decay over time is unavoidable, but meaningful novel words are better retained. These words also interfered more strongly in judgments of similar-sounding existing words, and showed less slowing down for cued recall of meaning associations, both indicating more integration and lexicalization for the meaningful novel words. Better memory for meaningful novel words may be due to the use of both the episodic memory network (hippocampus) and the semantic memory network (left fusiform gyrus, left angular/supramarginal gyrus) at the remote test.
  • Viebahn, M., Ernestus, M., & McQueen, J. M. (2014). Syntactic predictability can facilitate the recognition of casually produced words in connected speech. Poster presented at the 14th Conference on Laboratory Phonology (LabPhon 14), Tokyo, Japan.
  • Asaridou, S. S., Dediu, D., Takashima, A., Hagoort, P., & McQueen, J. M. (2013). Learning Dutchinese: Functional, structural, and genetic correlates of performance. Poster presented at the 3rd Latin American School for Education, Cognitive and Neural Sciences, Ilha de Comandatuba, Brazil.
  • Lai, V. T., Kim, A., & McQueen, J. M. (2013). Sentential context modulates early phases of visual word recognition: Evidence from a training manipulation. Talk presented at the 26th Annual CUNY Conference on Human Sentence Processing [CUNY 2013]. Columbia, SC. 2013-03-21 - 2013-03-23.

    Abstract

    How does sentential context influence visual word recognition? Recent neural models suggest that single words are recognized via a hierarchy of local combination detectors [1]. Low-level features are extracted first by neurons in V1 in the visual cortex, features are then combined and fed into the higher level of letter fragments in V2, and then letter shapes in V4, and so on. A recent EEG study examining word recognition in context has shown that contextually-driven anticipation can influence this hierarchy of visual word recognition early on [2]. Specifically, a minor mismatch between the predicted visual word form and the actual input (cake vs. ceke) can elicit brain responses ~130 ms after word onset [2].
  • Poellmann, K., McQueen, J. M., Baayen, R. H., & Mitterer, H. (2013). Adaptation to reductions: Challenges of regional variation. Talk presented at the 55th Conference of Experimental Psychologists (TeaP 2013). Vienna, Austria. 2013-03-24 - 2013-03-27.
  • Viebahn, M., Ernestus, M., & McQueen, J. M. (2013). Syntactic predictability facilitates the recognition of words in connected speech. Talk presented at the 18th Meeting of the European Society for Cognitive Psychology (ESCOP). Budapest (Hungary). 2013-08-29 - 2013-09-01.
  • Bakker, I., Takashima, A., van Hell, J., Janzen, G., & McQueen, J. M. (2012). Cross-modal effects on novel word consolidation. Talk presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2012]. Riva del Garda, Italy. 2012-09-06 - 2012-09-08.

    Abstract

    In line with two-stage models of memory, it has been proposed that memory traces for newly learned words are initially dependent on medial temporal structures and acquire neocortical, more lexical representations during the first night’s sleep after training (Davis & Gaskell, 2009). Only after sleep-dependent consolidation are novel words fully integrated into the lexicon and are therefore able to enter into lexical competition with phonologically overlapping existing words. This effect, observable as a slowing down of responses to existing words with a novel competitor, has been demonstrated using various tasks including lexical decision, pause detection, semantic judgement, and word-spotting.
  • Poellmann, K., McQueen, J. M., & Mitterer, H. (2012). How talker-adaptation helps listeners recognize reduced word-forms. Talk presented at the 164th Meeting of the Acoustical Society of America. Kansas City, Missouri. 2012-10-22 - 2012-10-26.
  • Sjerps, M. J., McQueen, J. M., & Mitterer, H. (2012). Behavioral and electrophysiological evidence for early vowel normalization. Talk presented at the 13th NVP Winter Conference on Cognition, Brain, and Behaviour (Dutch Psychonomic Society). Egmond aan Zee, the Netherlands. 2012-12-16 - 2012-12-17.
  • Takashima, A., Bakker, I., Van Hell, J. G., Janzen, G., & McQueen, J. M. (2012). Neural networks involved in retrieval of newly learned words and the effect of overnight consolidation: An fMRI study. Poster presented at the 42nd annual meeting of the Society for Neuroscience (Neuroscience 2012), New Orleans, LA.

    Abstract

    Declarative memory appears to involve two separate systems, with more episodically oriented memories coded in a hippocampal network, and more non-episodic or semantic memories coded in a neocortical network. Previous work (e.g., Dumay & Gaskell, 2007) has shown a role of sleep in the lexicalization of novel words. In line with the two-stage model of memory proposed by McClelland and colleagues (1995), the memory traces for novel words are initially dependent on hippocampal structures. However, a shift towards neocortical representations occurs during the first night’s sleep after training. This shift, or integration of newly learned words into the lexicon (lexicalization), can be observed behaviourally as lexical competition, where novel words slow down recognition of phonologically overlapping known words. To extend understanding of how newly learned words are incorporated into the semantic system, we conducted an fMRI study to elucidate the neural processes underlying sleep-dependent lexicalization, with the additional aim of investigating multimodal information integration in word learning. As a first step towards studying the acquisition of multimodal word meanings, we familiarized subjects with the phonological form of 40 novel words, of which 20 were associated with pictures of novel objects (“picture-associated words”) and 20 were not (“form-only words”). Immediately after training (Day 1) and on the following day (Day 2), we recorded the BOLD response to auditorily presented “trained novel words”, “untrained novel words”, and “existing words”, and administered a lexical competition task to test the effect of novel words on phonologically overlapping existing words. Behavioural data showed enhanced performance in recognition and recall of novel words after sleep, with a greater benefit for picture-associated words. However, lexical competition on Day 2 was greater for the form-only words. The fMRI data showed more involvement of the hippocampal network for picture-associated words than for form-only words. In contrast, form-only words activated the semantic memory network already on Day 1, whereas for picture-associated words this was more apparent on Day 2. This implies that the consolidation/lexicalization process differs depending on the degree of involvement of the two memory systems, with a greater involvement of the hippocampal system for picture-associated words. Stronger episodic memory traces might slow down the overnight shift of the novel picture-associated words to the lexical network, relative to the faster integration into this network of the form-only words.
  • Viebahn, M., Ernestus, M., & McQueen, J. M. (2012). Effects of repetition and temporal distance on vowel reduction in spontaneous speech. Poster presented at the 13th Conference on Laboratory Phonology (LabPhon 2012), Stuttgart, Germany.
  • Viebahn, M. C., Ernestus, M., & McQueen, J. M. (2012). Co-occurrence of reduced word forms in natural speech. Poster presented at INTERSPEECH 2012: 13th Annual Conference of the International Speech Communication Association, Portland, OR.
  • Viebahn, M., Ernestus, M., & McQueen, J. M. (2012). Co-occurrence of reduced word forms in spontaneous speech. Talk presented at The 11th edition of the Psycholinguistics in Flanders conference (PiF). Berg en Dal, The Netherlands. 2012-06-06 - 2012-06-07.
  • Scharenborg, O., Mitterer, H., & McQueen, J. M. (2011). Perceptual learning and allophonic variation in liquids. Poster presented at The First International Conference on Cognitive Hearing Science for Communication, Linköping, Sweden.

    Abstract

    Numerous studies have shown that listeners can adapt to idiosyncratic pronunciations through lexically-guided perceptual learning. For instance, an ambiguous sound between [s] and [f] (s/f) will be learned as /s/ if heard in words such as platypus, but as /f/ in words such as giraffe. This learning generalises, so that listeners hear [nais/f] as nice or knife depending on the exposure condition (platypus/f vs. giras/f). Previous research focused on contrasts that differ only in local cues, such as plosives and fricatives. Here we investigated whether perceptual learning also occurs for contrasts that differ in nonlocal cues (distributed over the syllable), such as the /l/-/r/ contrast in Dutch (implemented as [l] vs. [ɹ] in the Western part of the Netherlands). Listeners were exposed to an ambiguous [l/ɹ] in Dutch words ending in either /r/ or /l/. The ambiguous sound was created by morphing [əɹ] and [əl] syllables to capture the contrast’s distributed nature. A subsequent test phase revealed a significant difference in /r/-responses to an [əɹ]-[əl] continuum between the groups that had learned to interpret the ambiguous sound as either /r/ or /l/. We then went on to test whether learning generalises over allophonic differences. If so, exposure should influence the perception of another implementation of the contrast: that with a trilled /r/ ([ər]-[əl]), tested in both post- and pre-vocalic position (pre-vocalic approximants are not attested in Dutch). Preliminary results show that training effects are reduced when different allophones are used at test, suggesting that the learning effect has an allophonic basis.
  • Van Setten, E. R. H., Van Hell, J. G., Witteman, M. J., Weber, A., & McQueen, J. M. (2011). The influence of socio-cultural context and method of stimulus presentation on the processing of Dutch-English code-switches: An experimental study. Poster presented at the Workshop on 'Frontiers in linguistics, acquisition and multilingualism studies: Dynamic paradigms', Vaals, Netherlands.
  • Witteman, M. J., Bardhan, N. P., Weber, A., & McQueen, J. M. (2011). Adapting to foreign-accented speech: The role of delay in testing. Poster presented at the 162nd Meeting of the Acoustical Society of America, San Diego, CA.

    Abstract

    Understanding speech usually seems easy, but it can become noticeably harder when the speaker has a foreign accent. This is because foreign accents add considerable variation to speech. Research on foreign-accented speech shows that participants are able to adapt quickly to this type of variation. Less is known, however, about longer-term maintenance of adaptation. The current study focused on long-term adaptation by exposing native listeners to foreign-accented speech on Day 1, and testing them on comprehension of the accent one day later. Comprehension was thus not tested immediately, but only after a 24-hour period. On Day 1, native Dutch listeners listened to the speech of a Hebrew learner of Dutch while performing a phoneme monitoring task that did not depend on the talker’s accent. In particular, shortening of the long vowel /i/ into /ɪ/ (e.g., lief [li:f], ‘sweet’, pronounced as [lɪf]) was examined. These mispronunciations did not create lexical ambiguities in Dutch. On Day 2, listeners participated in a cross-modal priming task to test their comprehension of the accent. The results will be contrasted with results from an experiment without delayed testing and related to accounts of how listeners maintain adaptation to foreign-accented speech.
  • Witteman, M. J., Bardhan, N. P., Weber, A., & McQueen, J. M. (2011). Is adaptation to foreign-accented speech long-lasting?. Poster presented at the 13th Winter Conference of the Dutch Psychonomic Society, Egmond aan Zee, Netherlands.
  • Di Betta, A. M., McQueen, J. M., & Weber, A. (2010). Adaptation to Italian-accented English: A comparison of native and nonnative listeners. Poster presented at Psycholinguistic approaches to speech recognition in adverse conditions, Bristol, UK.
  • Cutler, A., El Aissati, A., Hanulikova, A., & McQueen, J. M. (2010). Effects on speech parsing of vowelless words in the phonology. Talk presented at 12th Conference on Laboratory Phonology. University of New Mexico in Albuquerque, NM. 2010-07-08 - 2010-07-10.
  • Mitterer, H., McQueen, J. M., Bosker, H. R., & Poellmann, K. (2010). Adapting to phonological reduction: Tracking how learning from talker-specific episodes helps listeners recognize reductions. Talk presented at the 5th annual meeting of the Schwerpunktprogramm (SPP) 1234/2: Phonological and phonetic competence: between grammar, signal processing, and neural activity. München, Germany.
  • Witteman, M. J., Weber, A., & McQueen, J. M. (2010). Rapid and long-lasting adaptation to foreign-accented speech. Poster presented at The 160th Meeting of the Acoustical Society of America (ASA), Cancún, Mexico.
  • Andics, A., McQueen, J. M., Petersson, K. M., Gál, V., & Vidnyánszky, Z. (2009). Neural correlates of voice category learning - An audiovisual fMRI study. Poster presented at 12th Meeting of the Hungarian Neuroscience Society, Budapest.

    Abstract

    Voices in the auditory modality, like faces in the visual modality, are the keys to person recognition. This fMRI experiment investigated the neural organisation of voice categories using a voice-training paradigm. Voice-morph continua were created between two female Hungarian speakers' voices saying six monosyllabic Hungarian words, one continuum per word. Listeners were trained to categorize the middle part of the continua as one voice. This trained voice category was associated with a face. Twenty-five listeners were tested twice with a one-week delay. To induce shifts in the trained category, listeners received feedback on their judgments such that the trained category was associated with different voice-morph intervals each week, allowing within-subject manipulation of whether stimuli corresponded to a trained voice-category centre, to a category boundary, or to another voice. The fMRI tests each week were preceded by eighty minutes of training distributed over two consecutive days. The tests included implicit and explicit categorization tasks. Voice- and face-selective areas were defined in separate localizer runs. Group-averaged local maxima from these runs were used for small-volume correction analyses. During implicit categorization, stimuli corresponding to trained voice-category centres elicited lower activity than other stimuli in voice-selective regions of the right STS. During explicit categorization, stimuli corresponding to trained voice-category boundaries elicited higher activity than other stimuli in voice-selective regions of the right VLPFC. Furthermore, the unimodal presentation of voices that are more associated with a face may elicit higher activity in visual areas. These results map out the way voice categories are neurally represented.
  • Di Betta, A. M., Weber, A., & McQueen, J. M. (2009). Trick or treat? Adaptation to Italian-accented English speech by native English, Italian, and Dutch listeners. Poster presented at 15th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2009), Barcelona.

    Abstract

    English is spoken worldwide by both native (L1) and nonnative (L2) speakers. It is therefore imperative to establish how easily L1 and L2 speakers understand each other. We know that L1 listeners adapt to foreign-accented speech very rapidly (Clarke & Garrett, 2004), and L2 listeners find L2 speakers (from matched and mismatched L1 backgrounds) as intelligible as native speakers (Bent & Bradlow, 2003). But foreign-accented speech can deviate widely from L1 pronunciation norms, for example when adult L2 learners experience difficulties in producing L2 phonemes that are not part of their native repertoire (Strange, 1995). For instance, Italian L2 learners of English often lengthen the lax English vowel /I/, making it sound more like the tense vowel /i/ (Flege et al., 1999). This blurs the distinction between words such as bin and bean. Unless listeners are able to adapt to this kind of pronunciation variance, it would hinder word recognition by both L1 and L2 listeners (e.g., /bin/ could mean either bin or bean). In this study we investigate whether Italian-accented English interferes with on-line word recognition for native English listeners and for nonnative English listeners, both those where the L1 matches the speaker accent (i.e., Italian listeners) and those with an L1 mismatch (i.e., Dutch listeners). Second, we test whether there is perceptual adaptation to the Italian-accented speech during the experiment in each of the three listener groups. Participants in all groups took part in the same cross-modal priming experiment. They heard spoken primes and made lexical decisions to printed targets, presented at the acoustic offset of the prime. The primes, spoken by a native Italian, consisted of 80 English words, half with /I/ in their standard pronunciation but mispronounced with an /i/ (e.g., trick spoken as treek), and half with /i/ in their standard pronunciation and pronounced correctly (e.g., treat). These words also appeared as targets, following either a related prime (which was either identical, e.g., treat-treat, or mispronounced, e.g., treek-trick) or an unrelated prime. All three listener groups showed identity priming (i.e., faster decisions to treat after hearing treat than after an unrelated prime), both overall and in each of the two halves of the experiment. In addition, the Italian listeners showed mispronunciation priming (i.e., faster decisions to trick after hearing treek than after an unrelated prime) in both halves of the experiment, while the English and Dutch listeners showed mispronunciation priming only in the second half of the experiment. These results suggest that Italian listeners, prior to the experiment, have learned to deal with Italian-accented English, and that English and Dutch listeners, during the experiment, can rapidly adapt to Italian-accented English. For listeners already familiar with a particular accent (e.g., through their own pronunciation), it appears that they have already learned how to interpret words with mispronounced vowels. Listeners who are less familiar with a foreign accent can quickly adapt to the way a particular speaker with that accent talks, even if that speaker is not talking in the listeners’ native language.
  • Huettig, F., & McQueen, J. M. (2009). AM radio noise changes the dynamics of spoken word recognition. Talk presented at 15th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2009). Barcelona, Spain. 2009-09-09.

    Abstract

    Language processing does not take place in isolation from the sensory environment. Listeners are able to recognise spoken words in many different situations, ranging from carefully articulated and noise-free laboratory speech, through casual conversational speech in a quiet room, to degraded conversational speech in a busy train station. For listeners to be able to recognize speech optimally in each of these listening situations, they must be able to adapt to the constraints of each situation. We investigated this flexibility by comparing the dynamics of the spoken-word recognition process in clear speech and speech disrupted by radio noise. In Experiment 1, Dutch participants listened to clearly articulated spoken Dutch sentences, each of which included a critical word, while their eye movements to four visual objects presented on a computer screen were measured. There were two critical conditions. In the first, the objects included a cohort competitor (e.g., parachute, “parachute”) with the same onset as the critical spoken word (e.g., paraplu, “umbrella”) and three unrelated distractors. In the second condition, a rhyme competitor (e.g., hamer, “hammer”) of the critical word (e.g., kamer, “room”) was present in the display, again with three distractors. To maximize competitor effects, pictures of the critical words themselves were not present in the displays on the experimental trials (e.g., there was no umbrella in the display with the 'paraplu' sentence) and a passive listening task was used (Huettig & McQueen, 2007). Experiment 2 was identical to Experiment 1 except that phonemes in the spoken sentences were replaced with radio-signal noises (as in AM radio listening conditions). In each sentence, two, three, or four phonemes were replaced with noises. The sentential position of these replacements was unpredictable, but the adjustments were always made to onset phonemes. The critical words (and the immediately surrounding words) were not changed. The question was whether listeners could learn that, under these circumstances, onset information is less reliable. We predicted that participants would look less at the cohort competitors (the initial match to the competitor is less good) and more at the rhyme competitors (the initial mismatch is less bad). We observed a significant experiment-by-competitor-type interaction. In Experiment 1, participants fixated both kinds of competitors more than unrelated distractors, but there were more and earlier looks to cohort competitors than to rhyme competitors (Allopenna et al., 1998). In Experiment 2, participants still fixated cohort competitors more than rhyme competitors, but the early cohort effect was reduced and the rhyme effect was stronger and occurred earlier. These results suggest that AM radio noise changes the dynamics of spoken word recognition. The well-attested finding of stronger reliance on word-onset overlap in speech recognition appears to be due in part to the use of clear speech in most experiments. When onset information becomes less reliable, listeners appear to depend on it less. A core feature of the speech-recognition system thus appears to be its flexibility. Listeners are able to adjust the perceptual weight they assign to different parts of incoming spoken language.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2009). Speaking rate context affects online word segmentation: Evidence from eye-tracking. Talk presented at "Speech perception and production in the brain" Summer Workshop of the Dutch Phonetic Society (NVFW). Leiden, the Netherlands. 2009-06-05.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2009). Speaking rate modulates lexical competition in online speech perception. Poster presented at 157th Meeting of the Acoustical Society of America, Portland, OR.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2009). Speaking rate modulates the perception of durational cues to lexical stress. Poster presented at 50th Annual Meeting of the Psychonomic Society, Boston, Mass.
  • Sjerps, M. J., McQueen, J. M., & Mitterer, H. (2009). At which processing level does extrinsic speaker information influence vowel perception?. Poster presented at 158th Meeting of the Acoustical Society of America, San Antonio, Texas.

    Abstract

    The interpretation of vowel sounds depends on perceived characteristics of the speaker (e.g., average first formant (F1) frequency). A vowel between /I/ and /E/ is more likely to be perceived as /I/ if a precursor sentence indicates that the speaker has a relatively high average F1. Behavioral and electrophysiological experiments investigating the locus of this extrinsic vowel normalization are reported. The normalization effect was first replicated with a categorization task: more vowels on an /I/-/E/ continuum followed by a /papu/ context were categorized as /I/ with a high-F1 context than with a low-F1 context. Two experiments then examined this context effect in a 4I-oddity discrimination task. Ambiguous vowels were more difficult to distinguish from the /I/-endpoint if the context /papu/ had a high F1 than if it had a low F1 (and vice versa for discrimination of ambiguous vowels from the /E/-endpoint). Furthermore, between-category discriminations were no easier than within-category discriminations. Together, these results suggest that the normalization mechanism operates largely at an auditory processing level. The mismatch negativity (an automatically evoked brain potential) arising from the same stimuli is being measured, to investigate whether extrinsic normalization takes place in the absence of an explicit decision task.
  • Witteman, M. J., Weber, A., & McQueen, J. M. (2009). Recognizing German-accented Dutch: Does prior experience matter?. Poster presented at 12th NVP Winter Conference on Cognition, Brain, and Behaviour, Egmond aan Zee, Netherlands.
  • Witteman, M. J., Weber, A., & McQueen, J. M. (2010). The influence of short- and long-term experience on recognizing German-accented Dutch. Poster presented at the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010], York, UK.
  • Hanulikova, A., Davidson, D. J., McQueen, J. M., & Mitterer, H. (2008). Native and non-native segmentation of continuous speech. Poster presented at XXIX International Congress of Psychology [ICP 2008], Berlin.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2008). Speaking rate affects the perception of word boundaries in online speech perception. Talk presented at 14th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2008). Cambridge, UK. 2008-09-04 - 2008-09-06.
  • Sjerps, M. J., & McQueen, J. M. (2008). The role of speech-specific signal characteristics in vowel normalization. Poster presented at the 156th Meeting of the Acoustical Society of America, Miami, FL.

    Abstract

    Listeners adjust their vowel perception to the characteristics of a particular speaker. Six experiments investigated whether speech-specific signal characteristics influence the occurrence and amount of such normalization. Previous findings were replicated with first formant (F1) manipulations of naturally recorded speech; target sounds on a [pIt] (low F1) to [pEt] (high F1) continuum were more often labeled as [pIt] after a precursor sentence with a high F1, and more often labeled as [pEt] after one with a low F1 (Exp. 1). Normalization was also observed, though to a lesser extent, when these materials were spectrally rotated, and hence sounded unlike speech (Exp. 2). No normalization occurred when, in addition to spectral rotation, the silent intervals and pitch-movement were removed and the syllables were temporally reversed (Exp. 3), despite spectral similarity of these precursors to those in Exp. 2. Reintroducing only pitch movement (Exp. 4), or silent intervals (Exp. 5), or spectrally-rotating the stimuli back (Exp. 6), did not result in normalization, so none of these factors alone accounts for the effect's disappearance in Exp. 3. These results show that normalization is not specific to speech, but still depends on more than the overall spectral properties of the preceding acoustic context.
  • Reinisch, E., Jesse, A., & McQueen, J. M. (2007). Lexical-stress information rapidly modulates spoken-word recognition. Talk presented at Dag van de Fonetiek. Utrecht, The Netherlands. 2007-12-20.
