I'm a PhD student in the Psychology of Language Department, working under the supervision of Dr. Andrea E. Martin and Prof. Dr. Antje S. Meyer.
How do we articulate thoughts into spoken sounds?
Most theories of language and speech production treat language planning and speech processes separately, so the interface between them remains underspecified. In my dissertation, I bridge these domains by studying (i) how the speech motor system represents and deploys well-practiced routines (“motor chunks”) versus assembling movements on the fly; (ii) how discrete linguistic plans (phonemes and metrical structure) are sequenced during phonological planning; and (iii) how this mapping interacts with phrase-level prosody. I address these questions using theoretical analysis, behavioural and electrophysiological experiments, and computational (neurocognitive) modelling.
What have we learned from this research?
(i) We replicated previous findings that syllable frequency facilitates speech motor planning: frequent syllables yielded faster reaction times and distinct ERP signatures during planning. Unexpectedly, we also observed longer articulation durations for more frequent syllables; future work will assess the robustness and interpretation of this effect and characterize the properties of motor chunks more precisely.
(ii) In ongoing work, we propose a characterization of syllabification that bridges existing neurocognitive models of production. We align the representational assumptions of the WEAVER framework (Roelofs, 1997, 2015) and the GODIVA model (Bohland & Guenther, 2010) and specify how segmental and metrical information interact via the procedural system to implement phonological rules in English. This will allow us to simulate the production of multisyllabic words and serve as a first step towards simulating multi-word utterances.
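To make the notion of a procedurally implemented phonological rule concrete, here is a minimal sketch of onset-maximizing syllabification in Python. The phoneme labels, the toy set of legal onsets, and the syllabify function are illustrative assumptions for this page only; they are not the representations or mechanisms used in WEAVER or GODIVA.

```python
# Minimal sketch of onset-maximization syllabification.
# The vowel inventory and legal-onset set below are simplified
# assumptions for illustration, not the WEAVER/GODIVA representations.

VOWELS = {"AA", "AE", "AH", "EH", "EY", "IH", "IY", "UW"}   # toy inventory
LEGAL_ONSETS = {(), ("P",), ("T",), ("K",), ("S",), ("L",),
                ("P", "L"), ("S", "T"), ("S", "P", "L")}     # toy subset

def syllabify(phonemes):
    """Group a flat phoneme sequence into syllables, assigning as many
    intervocalic consonants to the following onset as the toy
    phonotactics allow (onset maximization)."""
    nuclei = [i for i, p in enumerate(phonemes) if p in VOWELS]
    syllables, boundary = [], 0
    for j, nucleus in enumerate(nuclei):
        if j + 1 < len(nuclei):
            # Consonant cluster between this nucleus and the next one.
            cluster = phonemes[nucleus + 1:nuclei[j + 1]]
            # Give the next syllable the longest legal suffix of the
            # cluster; whatever precedes it stays as this syllable's coda.
            split = len(cluster)
            for k in range(len(cluster) + 1):
                if tuple(cluster[k:]) in LEGAL_ONSETS:
                    split = k
                    break
            end = nucleus + 1 + split
        else:
            end = len(phonemes)
        syllables.append(phonemes[boundary:end])
        boundary = end
    return syllables

# A toy transcription of "display": D-IH-S-P-L-EY -> DIH.SPLEY
print(syllabify(["D", "IH", "S", "P", "L", "EY"]))
```

The sketch states the interaction of segmental content with syllable structure as an explicit procedure: intervocalic consonants are assigned to the following syllable whenever the resulting cluster is a legal onset, which is one way such a rule could be expressed for simulation.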