Presentations

  • Maslowski, M., & Rodd, J. (2019). Speech rate variation: How to perceive fast and slow speech, and how to speed up and slow down in speech production. Talk presented at the ACLC Seminar. Amsterdam, The Netherlands. 2019-04-26.

    Abstract

    Speech rate is one of the more salient stylistic dimensions along which speech can vary. We present both sides of this story: how listeners make use of this variation to optimise speech perception, and how the speech production system is modulated to produce speech at different rates.

    Listeners take speech rate variation into account by normalizing vowel duration for contextual speech rate: an ambiguous Dutch word /m?t/ is perceived as short /mAt/ when embedded in a slow context, but as long /ma:t/ in a fast context. Many have argued that rate normalization involves low-level, early, and automatic perceptual processing. However, prior research on rate-dependent speech perception has only used explicit recognition tasks to investigate the phenomenon, which involve both perceptual processing and decision making. Speech rate effects are induced both by local adjacent temporal cues and by global non-adjacent cues. In this talk, I present evidence that local rate normalization takes place, at least in part, at a perceptual level, even in the absence of an explicit recognition task. In contrast, global effects of speech rate seem to involve higher-level cognitive adjustments, possibly taking place at a later decision-making stage.

    That speakers can vary their speech rate is evident, but how they accomplish this has hardly been studied. Consider this analogy: when walking, speed can be increased continuously, within limits, but to speed up further, humans must run. Are there multiple qualitatively distinct speech 'gaits' that resemble walking and running? Or is rate control achieved solely by continuous modulation of a single gait? These possibilities are investigated through simulations of a new connectionist computational model of the cognitive process of speech production. The model has parameters that can be adjusted to fit the temporal characteristics of natural speech at different rates. During training, different clusters of parameter values (regimes) were identified for different speech rates. In a one-gait system, the regimes used to achieve fast and slow speech are qualitatively similar but quantitatively different. In a multiple-gait system, there is no linear relationship between the parameter settings associated with each gait, so moving from speaking slowly to speaking fast entails an abrupt shift in parameter values. After training, the model achieved good fits at all three speech rates. The parameter settings associated with each speech rate were not linearly related, suggesting the presence of cognitive gaits, and thus that speakers make use of distinct cognitive configurations for different speech rates.
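    The one-gait versus multiple-gait contrast hinges on whether the regimes for different rates are linearly related in parameter space. A minimal sketch of that diagnostic, with invented parameter values and a hypothetical midpoint test (this is an illustration of the reasoning, not the model itself):

```python
# Illustrative sketch: decide whether three hypothetical parameter regimes
# look like one continuously scaled "gait" or like qualitatively distinct
# gaits. All parameter values below are invented for illustration.

def is_linearly_related(slow, medium, fast, tol=0.05):
    """Return True if the medium-rate regime lies approximately on the
    straight line between the slow and fast regimes in parameter space,
    as a single-gait account would predict."""
    for s, m, f in zip(slow, medium, fast):
        midpoint = (s + f) / 2.0
        if abs(m - midpoint) > tol * max(abs(s), abs(f), 1e-9):
            return False
    return True

# One-gait scenario: regimes differ only quantitatively.
one_gait = {
    "slow":   [0.2, 1.0, 0.50],
    "medium": [0.4, 1.5, 0.75],
    "fast":   [0.6, 2.0, 1.00],
}

# Multiple-gait scenario: the fast regime jumps to a different configuration.
multi_gait = {
    "slow":   [0.2, 1.0, 0.50],
    "medium": [0.4, 1.5, 0.75],
    "fast":   [2.0, 0.1, 3.00],
}

print(is_linearly_related(**one_gait))    # True: consistent with a single gait
print(is_linearly_related(**multi_gait))  # False: suggests distinct gaits
```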

  • Rodd, J. (2019). The EPONA model: Simulation of the control of speaking rate. Talk presented at the Seminar of the DFG Research Group "Spoken Morphology". Düsseldorf, Germany. 2019-03-26.
  • Rodd, J., & Maslowski, M. (2019). Speech rate variation: How to speed up and slow down in speech production, and how to perceive fast and slow speech. Talk presented at the Experimental Linguistics Talks Utrecht (ELiTU). Utrecht, The Netherlands. 2019-04-15.

    Abstract

    Speech rate is one of the more salient stylistic dimensions along which speech can vary. We present both sides of this story: how the speech production system is modulated to produce speech at different rates, and how listeners make use of this variation to optimise speech perception.

    Joe Rodd: Speakers switch between qualitatively different cognitive ‘gaits’ to produce speech at different rates

    That speakers can vary their speech rate is evident, but how they accomplish this has hardly been studied. Consider this analogy: when walking, speed can be increased continuously, within limits, but to speed up further, humans must run. Are there multiple qualitatively distinct speech 'gaits' that resemble walking and running? Or is rate control achieved solely by continuous modulation of a single gait? These possibilities are investigated through simulations of a new connectionist computational model of the cognitive process of speech production. The model has parameters that can be adjusted to fit the temporal characteristics of natural speech at different rates. During training, different clusters of parameter values (regimes) were identified for different speech rates. In a one-gait system, the regimes used to achieve fast and slow speech are qualitatively similar but quantitatively different. In a multiple-gait system, there is no linear relationship between the parameter settings associated with each gait, so moving from speaking slowly to speaking fast entails an abrupt shift in parameter values. After training, the model achieved good fits at all three speech rates. The parameter settings associated with each speech rate were not linearly related, suggesting the presence of cognitive gaits, and thus that speakers make use of distinct cognitive configurations for different speech rates.

    Merel Maslowski: Listeners use the speech rate context to tune their speech perception

    Listeners take speech rate variation into account by normalizing vowel duration for contextual speech rate: an ambiguous Dutch word /m?t/ is perceived as short /mAt/ when embedded in a slow context, but as long /ma:t/ in a fast context. Many have argued that rate normalization involves low-level, early, and automatic perceptual processing. However, prior research on rate-dependent speech perception has only used explicit recognition tasks to investigate the phenomenon, which involve both perceptual processing and decision making. Speech rate effects are induced both by local adjacent temporal cues and by global non-adjacent cues. In this talk, I present evidence that local rate normalization takes place, at least in part, at a perceptual level, even in the absence of an explicit recognition task. In contrast, global effects of speech rate seem to involve higher-level cognitive adjustments, possibly taking place at a later decision-making stage.
  • Rodd, J., Bosker, H. R., Meyer, A. S., Ernestus, M., & Ten Bosch, L. (2018). How to speed up and slow down: Speaking rate control to the level of the syllable. Talk presented at the New Observations in Speech and Hearing seminar series, Institute of Phonetics and Speech processing, LMU Munich. Munich, Germany.
  • Terband, H., Rodd, J., & Maas, E. (2018). Testing hypotheses about the underlying deficit of Apraxia of Speech (AOS) through computational neural modelling with the DIVA model. Talk presented at Dag van de Fonetiek. Amsterdam, The Netherlands. 2018-12-21.
  • Terband, H., Rodd, J., & Maas, E. (2018). Testing hypotheses about the underlying deficit of apraxia of speech (AOS) through computational neural modelling: Effects of noise masking on vowel production in the DIVA model. Talk presented at the Madonna Motor Speech Conference. Savannah, GA, USA. 2018-02-22 - 2018-02-25.
  • Rodd, J., & Chen, A. (2016). Pitch accents show a perceptual magnet effect: Evidence of internal structure in intonation categories. Talk presented at Speech Prosody 2016. Boston, MA, USA. 2016-05-31 - 2016-06-03.
  • Terband, H., Rodd, J., & Maas, E. (2015). Simulations of feedforward and feedback control in apraxia of speech (AOS): Effects of noise masking on vowel production in the DIVA model. Talk presented at The 18th International Congress of Phonetic Sciences (ICPhS 2015). Glasgow, UK. 2015-08-10 - 2015-08-14.

    Abstract

    Apraxia of Speech (AOS) is a motor speech disorder whose precise nature is still poorly understood. A recent behavioural experiment featuring a noise-masking paradigm suggests that AOS reflects a disruption of feedforward control, whereas feedback control is spared and plays a more prominent role in achieving and maintaining segmental contrasts [10]. In the present study, we set out to validate the interpretation of AOS as a feedforward impairment by means of a series of computational simulations with the DIVA model [6, 7] mimicking the behavioural experiment. The simulation results showed a larger reduction in vowel spacing and a smaller vowel dispersion in the masking condition compared with the no-masking condition for the simulated feedforward deficit, whereas the other groups showed the opposite pattern. These results mimic the patterns observed in the human data, corroborating the notion that AOS can be conceptualized as a deficit in feedforward control.
