James McQueen

Presentations

  • Dai, B., Kösem, A., McQueen, J. M., & Hagoort, P. (2016). Pure linguistic interference during comprehension of competing speech signals. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    In certain situations, human listeners have more difficulty understanding speech in a multi-talker environment than in the presence of unintelligible noise. The costs of speech-in-speech masking have been attributed to informational masking, i.e. to the competing processing of information from the target and the distractor speech. It remains unclear what kind of information is competing, as intelligible speech and unintelligible speech-like signals (e.g. reversed, noise-vocoded, and foreign speech) differ both in linguistic content and in acoustic information. Thus, intelligible speech could be a stronger distractor than unintelligible speech because its acoustic information is closer to that of the target speech, or because it carries competing linguistic information. In this study, we aimed to isolate the linguistic component of speech-in-speech masking and to test its influence on the comprehension of target speech. To do so, 24 participants performed a dichotic listening task in which the interfering stimuli consisted of 4-band noise-vocoded sentences that could become intelligible through training. The experiment included three steps: first, participants were instructed to report the clear target speech from a mixture of one clear speech channel and one unintelligible noise-vocoded speech channel; second, they were trained on the interfering noise-vocoded sentences so that these became intelligible; third, they performed the dichotic listening task again. Crucially, before and after training, the distractor speech had the same acoustic features but not the same linguistic information. We thus predicted that the distracting noise-vocoded signal would interfere more with target speech comprehension after training than before training. To control for practice/fatigue effects, we used additional 2-band noise-vocoded sentences, on which participants were not trained, as interfering signals in the dichotic listening tasks. We expected that performance on these trials would not change after training, or would change less than performance on trials with trained 4-band noise-vocoded sentences. Performance was measured under three SNR conditions: 0, -3, and -6 dB. The behavioral results are consistent with our predictions. The 4-band noise-vocoded signal interfered more with the comprehension of target speech after training (i.e. when it was intelligible) than before training (i.e. when it was unintelligible), but only at an SNR of -3 dB. Crucially, comprehension of the target speech did not change after training when the interfering signals consisted of unintelligible 2-band noise-vocoded speech, ruling out a fatigue effect. In line with previous studies, the present results show that intelligible distractors interfere more with the processing of target speech. These findings further suggest that speech-in-speech interference originates, to a certain extent, from the parallel processing of competing linguistic content. A magnetoencephalography study with the same design is currently being performed to investigate specifically the neural origins of informational masking.
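
    The two signal manipulations at the core of this design, band-limited noise vocoding and mixing two channels at a fixed SNR, can be illustrated compactly. Below is a minimal Python sketch; the band edges, filter orders, and envelope cutoff are illustrative assumptions, not the parameters actually used in the study.

    ```python
    # Sketch of (1) n-band noise vocoding and (2) target/distractor mixing at a
    # given SNR. Assumes x is a float mono signal sampled at fs (with fs > 2*hi).
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def noise_vocode(x, fs, n_bands=4, lo=100.0, hi=8000.0):
        """Replace the fine structure of x with noise, keeping per-band envelopes."""
        edges = np.geomspace(lo, hi, n_bands + 1)     # log-spaced band edges
        env_sos = butter(2, 30.0, btype="lowpass", fs=fs, output="sos")
        rng = np.random.default_rng(0)
        out = np.zeros_like(x)
        for f1, f2 in zip(edges[:-1], edges[1:]):
            band_sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(band_sos, x)
            env = np.clip(sosfiltfilt(env_sos, np.abs(band)), 0.0, None)
            carrier = sosfiltfilt(band_sos, rng.standard_normal(len(x)))
            carrier /= np.sqrt(np.mean(carrier ** 2)) + 1e-12
            out += env * carrier                      # envelope-modulated noise band
        return out

    def mix_at_snr(target, distractor, snr_db):
        """Scale the distractor so the target-to-distractor RMS ratio is snr_db."""
        rms = lambda s: np.sqrt(np.mean(s ** 2))
        gain = rms(target) / (rms(distractor) * 10.0 ** (snr_db / 20.0))
        return target, gain * distractor              # one channel per ear (dichotic)

    # e.g. mix_at_snr(clear_sentence, noise_vocode(other_sentence, 44100), -3.0)
    ```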
  • Dai, B., Kösem, A., McQueen, J. M., & Hagoort, P. (2016). Pure linguistic interference during comprehension of competing speech signals. Poster presented at the 8th Speech in Noise Workshop (SpiN), Groningen, The Netherlands.
  • Franken, M. K., Schoffelen, J.-M., McQueen, J. M., Acheson, D. J., Hagoort, P., & Eisner, F. (2016). Neural correlates of auditory feedback processing during speech production. Poster presented at New Sounds 2016: 8th International Conference on Second-Language Speech, Aarhus, Denmark.

    Abstract

    An important aspect of L2 speech learning is the interaction between speech production and perception. One way to study this interaction is to provide speakers with altered auditory feedback and to investigate how unexpected auditory feedback affects subsequent speech production. Although it is generally well established that speakers on average compensate for auditory feedback perturbations, even when unaware of the manipulation, the neural correlates of responses to perturbed auditory feedback are not well understood. In the present study, we provided speakers with auditory feedback that was intermittently pitch-shifted, while we measured their neural activity using magnetoencephalography (MEG). Participants were instructed to vocalize the Dutch vowel /e/ while trying to match the pitch of a short tone. During vocalization, participants received auditory feedback through headphones. In half of the trials, the pitch in the feedback signal was shifted by -25 cents, starting at a jittered delay after speech onset and lasting for 500 ms. Trials with perturbed feedback and control trials (with normal feedback) were presented in random order. Post-experiment questionnaires showed that none of the participants was aware of the pitch manipulation. Behaviorally, the results show that participants on average compensated for the auditory feedback by shifting the pitch of their speech in the opposite (upward) direction. This suggests that even though participants were not aware of the pitch shift, they automatically compensated for the unexpected feedback signal. The MEG results show a right-lateralized response to both the onset and the offset of the pitch perturbation during speaking. We suggest that this response reflects detection of the mismatch between the predicted and perceived feedback signals, which could subsequently drive behavioral adjustments. These results are in line with recent models of speech motor control and provide further insights into the neural correlates of speech production and speech feedback processing.
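
    The size of the shift is defined on the cent scale (100 cents = 1 semitone, 1200 cents = 1 octave), so a -25 cent shift multiplies the fundamental frequency by 2^(-25/1200) ≈ 0.986. A minimal sketch of this arithmetic follows; the 220 Hz voice is a hypothetical example, not a value from the study.

    ```python
    # Convert between cents and frequency ratios, as used to define the
    # feedback perturbation and to quantify compensation magnitude.
    import numpy as np

    def cents_to_ratio(cents):
        """Frequency ratio corresponding to a shift in cents."""
        return 2.0 ** (cents / 1200.0)

    def deviation_in_cents(f, f_ref):
        """Express a produced pitch f as a deviation from f_ref, in cents."""
        return 1200.0 * np.log2(f / f_ref)

    f0 = 220.0                                # hypothetical 220 Hz voice
    shifted = f0 * cents_to_ratio(-25)        # the -25 cent feedback shift
    print(round(shifted, 2))                  # ~216.85 Hz: small enough to go unnoticed
    print(round(deviation_in_cents(shifted, f0), 1))   # -25.0 by construction
    ```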
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2016). Neural mechanisms underlying auditory feedback processing during speech production. Talk presented at the Donders Discussions 2016. Nijmegen, The Netherlands. 2016-11-23 - 2016-11-24.

    Abstract

    Speech production is one of the most complex motor skills, and involves close interaction between perceptual and motor systems. One way to investigate this interaction is to provide speakers with manipulated auditory feedback during speech production. Using this paradigm, investigators have started to identify a neural network that underlies auditory feedback processing and monitoring during speech production. However, to date, little is known about the neural mechanisms that underlie feedback processing. The present study set out to shed more light on the neural correlates of processing auditory feedback. Participants (N = 39) were seated in an MEG scanner and were asked to vocalize the vowel /e/ continuously throughout each trial (of 4 s) while trying to match a pre-specified pitch target of 4, 8 or 11 semitones above their baseline pitch level. They received auditory feedback through ear plugs. In half of the trials, the pitch in the auditory feedback was unexpectedly manipulated (raised by 25 cents) for 500 ms, starting between 500 ms and 1500 ms after speech onset. In the other trials, feedback was normal throughout the trial. In a second block of trials, participants listened passively to recordings of the auditory feedback they had received during vocalization in the first block. Even though none of the participants reported being aware of any feedback perturbations, behavioral responses showed that participants on average compensated for the feedback perturbation by decreasing the pitch of their vocalizations, starting about 100 ms after perturbation onset and lasting until about 100 ms after perturbation offset. MEG data were analyzed, time-locked to the onset of the feedback perturbation in the perturbation trials and to matched time points in the control trials. A cluster-based permutation test showed that the event-related field (ERF) responses differed between the perturbation and control conditions. This difference was mainly driven by an ERF response peaking about 100 ms after perturbation onset and a larger response after perturbation offset. Both responses were localized to sensorimotor cortices, with the effect being larger in the right hemisphere. These results are in line with previous reports of right-lateralized pitch processing. In the passive listening condition, we found no differences between the perturbation and control trials. This suggests that the ERF responses were not merely driven by the pitch change in the auditory input and instead reflect speech production processes. We suggest that the observed ERF responses in sensorimotor cortex index the mismatch between the self-generated forward-model prediction of auditory input and the incoming auditory signal.
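
    The statistical comparison described here can be sketched with MNE-Python's cluster-based permutation machinery. The sketch below runs on toy random data and uses a paired test (a one-sample test on condition differences); the array sizes, the default lattice adjacency, and the thresholds are assumptions for illustration, not the study's actual analysis settings.

    ```python
    # Hedged sketch of a cluster-based permutation test on ERF data.
    import numpy as np
    from mne.stats import permutation_cluster_1samp_test

    # Toy subject-averaged ERFs, shape (n_subjects, n_sensors, n_times);
    # the study tested N = 39 participants.
    rng = np.random.default_rng(0)
    n_subj, n_sens, n_times = 39, 10, 120
    erf_perturb = rng.standard_normal((n_subj, n_sens, n_times))
    erf_control = rng.standard_normal((n_subj, n_sens, n_times))

    # Paired design: test the perturbation-minus-control difference against zero.
    # MNE expects (observations, times, sensors), hence the transpose.
    diff = (erf_perturb - erf_control).transpose(0, 2, 1)

    t_obs, clusters, cluster_pv, _ = permutation_cluster_1samp_test(
        diff, n_permutations=1000, tail=0, seed=0)
    # adjacency=None assumes a lattice; real sensor data would instead pass an
    # adjacency matrix from mne.channels.find_ch_adjacency.

    for cluster, p in zip(clusters, cluster_pv):
        if p < 0.05:
            print("significant cluster, p =", p)
    ```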
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2016). Neural mechanisms underlying auditory feedback processing during speech production. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.
  • Goriot, C., Broersma, M., Van Hout, R., McQueen, J. M., & Unsworth, S. (2017). De relatie tussen vroeg vreemdetalenonderwijs en de ontwikkeling van het fonologisch bewustzijn [The relationship between early foreign-language education and the development of phonological awareness]. Talk presented at the Grote Taaldag 2017. Utrecht, The Netherlands. 2017-02-04.
  • Goriot, C., Broersma, M., McQueen, J. M., Unsworth, S., & Van Hout, R. (2016). L1-effecten in een Engelse woordenschattaak: De PPVT-4 [L1 effects in an English vocabulary task: The PPVT-4]. Talk presented at the Grote Taaldag. Utrecht, The Netherlands. 2016-02-06.
  • Goriot, C., Van Hout, R., Broersma, M., McQueen, J. M., & Unsworth, S. (2016). Is there an effect of early-English education on the development of pupils’ executive functions? Talk presented at Psycholinguistics in Flanders (PiF 2016). Antwerp, Belgium. 2016-05-26 - 2016-05-27.
  • Goriot, C., Van Hout, R., Broersma, M., McQueen, J. M., & Unsworth, S. (2016). The relationship between early-English education and executive functions: Balance is key. Talk presented at the Conference on Multilingualism (COM 2016). Ghent, Belgium. 2016-09-11 - 2016-09-13.
  • Goriot, C., Van Hout, R., Broersma, M., Unsworth, S., & McQueen, J. M. (2016). The influence of cognates on Dutch pupils’ English vocabulary scores in the Peabody Picture Vocabulary Test. Talk presented at EuroSLA 26. Jyväskylä, Finland. 2016-08-24 - 2016-08-27.
  • Hintz, F., McQueen, J. M., & Scharenborg, O. (2016). Effects of frequency and neighborhood density on spoken-word recognition in noise: Evidence from perceptual identification in Dutch. Talk presented at the 22nd Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2016). Bilbao, Spain. 2016-09-01 - 2016-09-03.
  • Krutwig, J., Sadakata, M., Garcia-Cossio, E., Desain, P., & McQueen, J. M. (2016). Perception and production interactions in non-native speech category learning. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Reaching a native-like level in a second language includes mastering phoneme contrasts that are not distinguished in one’s mother tongue – both in perception and in production. This study explores how those two domains interact in the course of learning, and how behavioural changes in both listening and speaking ability are related to traceable changes in the brain. Unravelling the processes underlying speech category learning could guide the design of more efficient training methods. Production and perception processes could support each other during learning, or they could interfere with each other. Baese-Berk et al. (2010), for instance, observed delayed learning when perceptual training was combined with production practice compared to perception-only training. These results could indicate perception-production interference, but could also be explained by differences in cognitive load between the two conditions. In order to disentangle the added value of production training in perceptual category learning, we systematically contrasted the combination of perceptual training with either related or unrelated production. Thirty-one native speakers of Dutch, distributed between two groups, participated in a 4-day high-variability training protocol on the British-English /æ/-/ε/ vowel contrast (multiple words spoken by multiple talkers). In the related production group (n=15), feedback on a perceptual categorisation task was combined with pronouncing the respective correct word on every trial, whereas in the unrelated production group (n=16) it was combined with pronouncing a matched but phonologically unrelated set of words. Cognitive load was matched between groups. Pre- and post-training measurements were taken of both perceptual abilities (an identification task, an identification task assessing category boundaries on a morphed continuum, and a discrimination task on the same continuum) and production ability (a reading-aloud task with a list of isolated words). All auditory stimulus words during training were presented according to a classical oddball paradigm, while electrophysiological activity was recorded continuously. This enabled us to track neural changes in auditory discrimination ability using the mismatch negativity (MMN) response. Results indicate that participants’ perceptual ability significantly improved over the course of training. No significant difference in perceptual learning arose between the two groups. Measurements of the distributions of formants F1 and F2 in the words in the production task before and after training (quantified in terms of Mahalanobis distance) showed that participants in both groups significantly improved after training: the two English target vowels became acoustically more distinct. Analyses of the electrophysiological data and of the other behavioural tasks are ongoing and will be presented. The fact that participants’ perceptual ability improved similarly regardless of whether they also practiced the respective productions could be seen as evidence that the perception and production systems for non-native vowels are separate. A more likely explanation, however, is that the added value of practicing the pronunciation of the vowels might have been counteracted – especially early in training – by exposure to sub-optimal utterances as the participants listened to their own voices. In order for production practice to be beneficial for the learner, immediate and informative feedback on production outcomes might be necessary.
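
    The Mahalanobis-distance measure of vowel separation mentioned above can be sketched as follows. The toy formant values and the pooled-covariance choice are illustrative assumptions; the abstract does not specify these details.

    ```python
    # Quantify how acoustically distinct two vowel categories are in F1/F2 space.
    import numpy as np

    def mahalanobis_separation(a, b):
        """Mahalanobis distance between the category means of two (n_tokens, 2)
        arrays of [F1, F2] values, using the pooled within-category covariance."""
        cov = (np.cov(a, rowvar=False) + np.cov(b, rowvar=False)) / 2.0
        d = a.mean(axis=0) - b.mean(axis=0)
        return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

    # Toy /æ/ and /ε/ productions (formants in Hz, made-up category statistics).
    rng = np.random.default_rng(1)
    ae = rng.multivariate_normal([700, 1650], [[90**2, 0], [0, 150**2]], size=40)
    eh = rng.multivariate_normal([580, 1800], [[90**2, 0], [0, 150**2]], size=40)
    print(mahalanobis_separation(ae, eh))  # larger after training = more distinct vowels
    ```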
  • McQueen, J. M., & Meyer, A. S. (2016). Cognitive architectures [Session Chair]. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.
