Antje Meyer

Presentations

  • Akamine, S., Dingemanse, M., Meyer, A. S., & Ozyurek, A. (2023). Contextual influences on multimodal alignment in Zoom interaction. Talk presented at the 1st International Multimodal Communication Symposium (MMSYM 2023). Barcelona, Spain. 2023-04-26 - 2023-04-28.
  • Bethke, S., Meyer, A. S., & Hintz, F. (2023). Developing the individual differences in language skills (IDLaS-DE) test battery—A new tool for German. Poster presented at Psycholinguistics in Flanders (PiF 2023), Ghent, Belgium.
  • Bujok, R., Peeters, D., Meyer, A. S., & Bosker, H. R. (2023). When the beat drops – beat gestures recalibrate lexical stress perception. Talk presented at the 1st International Multimodal Communication Symposium (MMSYM 2023). Barcelona, Spain. 2023-04-26 - 2023-04-28.
  • Bujok, R., Peeters, D., Meyer, A. S., & Bosker, H. R. (2023). Beat gestures can drive recalibration of lexical stress perception. Poster presented at the 5th Phonetics and Phonology in Europe Conference (PaPE 2023), Nijmegen, The Netherlands.
  • Bujok, R., Peeters, D., Meyer, A. S., & Bosker, H. R. (2023). Beat gestures can drive recalibration of lexical stress perception. Poster presented at the Donders Poster Session 2023, Nijmegen, The Netherlands.
  • Chauvet, J., Slaats, S., Poeppel, D., & Meyer, A. S. (2023). The syllable frequency effect before and after speaking. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.

    Abstract

    Speaking requires translating concepts into a sequence of sounds. Contemporary models of language production assume that this translation involves a series of steps: from selecting the concepts to be expressed, to phonetic and articulatory encoding of the words. In addition, speakers monitor their planned output using sensorimotor predictive mechanisms. The current work concerns phonetic encoding and the speaker's monitoring of articulation. Specifically, we test whether monitoring is sensitive to the frequency of syllable-sized representations.
    We run a series of immediate and delayed syllable production experiments (repetition and reading), exploiting the syllable-frequency effect: in immediate naming, high-frequency syllables are produced faster than low-frequency syllables. The effect is thought to reflect the stronger automatization of motor-plan retrieval for high-frequency syllables during phonetic encoding. We predict distinct ERP and spatiotemporal patterns for high- vs. low-frequency syllables. Following articulation, we analyse auditory-evoked N1 responses, which – among other features – reflect the suppression of one's own speech. Low-frequency syllables are expected to require closer monitoring and therefore to show smaller N1/P2 amplitudes. These results are of interest because effects of syllable frequency can inform us about the tradeoff between stored and assembled representations in setting sensory targets for speech production.
  • Chauvet, J., Slaats, S., Poeppel, D., & Meyer, A. S. (2023). The syllable frequency effect before and after speaking. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, Netherlands.

  • Corps, R. E., & Meyer, A. S. (2023). Repetition leads to long-term suppression of the word frequency effect. Talk presented at Psycholinguistics in Flanders (PiF 2023). Ghent, Belgium. 2023-05-29 - 2023-05-31.
  • Meyer, A. S., Schulz, F., & Hintz, F. (2023). Accounting for good enough conversational speech. Talk presented at the IndiPrag Workshop. Saarbruecken, Germany. 2023-09-18 - 2023-09-19.
  • Papoutsi, C., Tourtouri, E. N., Piai, V., Lampe, L. F., & Meyer, A. S. (2023). Fast and efficient or slow and struggling? Comparing the response times of errors and targets in speeded word production. Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
  • Schulz, F. M., Corps, R. E., & Meyer, A. S. (2023). Individual differences in the production of speech disfluencies. Poster presented at Psycholinguistics in Flanders (PiF 2023), Ghent, Belgium.
  • Schulz, F. M., Corps, R. E., & Meyer, A. S. (2023). Individual differences in the production of speech disfluencies. Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
  • Schulz, F. M., Corps, R. E., & Meyer, A. S. (2023). Individual differences in disfluency production. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.

    Abstract

    Producing spontaneous speech is challenging. It often contains disfluencies like repetitions, prolongations, silent pauses or filled pauses. Previous research has largely focused on the language-based factors (e.g., planning difficulties) underlying the production of these disfluencies. But research has also shown that some speakers are more disfluent than others. What cognitive mechanisms underlie this difference? We reanalyzed a behavioural dataset of 112 participants, who were assessed on a battery of tasks testing linguistic knowledge, processing speed, non-verbal IQ, working memory, and basic production skills and also produced six 1-minute samples of spontaneous speech (Hintz et al., 2020). We assessed the length and lexical diversity of participants’ speech and determined how often they produced silent pauses and filled pauses. We used network analysis, factor analysis and non-parametric regressions to investigate the relationship between these variables and individual differences in particular cognitive skills. We found that individual differences in linguistic knowledge or processing speed were not related to the production of disfluencies. In contrast, the proportion of filled pauses (relative to all words in the 1-minute narratives) correlated negatively with working memory capacity.
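    The filled-pause measure and a rank-based correlation of the kind reported above can be sketched as follows. This is a hypothetical illustration with invented data, not the study's analysis code (which used network analysis, factor analysis, and non-parametric regressions on the Hintz et al., 2020 dataset).

```python
from statistics import mean

def filled_pause_proportion(words, fillers=("uh", "um")):
    """Proportion of filled pauses relative to all words in a sample."""
    return sum(w in fillers for w in words) / len(words)

def rank(xs):
    """Simple 1-based ranks; no tie handling, for illustration only."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for r, i in enumerate(order):
        ranks[i] = r + 1.0
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented excerpt of a 1-minute sample: 2 fillers out of 10 words.
sample = "uh i went to um the market and bought fish".split()
print(filled_pause_proportion(sample))  # 0.2

# Invented scores: higher working memory, lower filled-pause proportion,
# i.e. the negative correlation the abstract reports.
wm_scores = [3, 5, 7, 9]
fp_props = [0.30, 0.22, 0.15, 0.10]
print(round(spearman(wm_scores, fp_props), 10))  # -1.0
```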
  • Slaats, S., Meyer, A. S., & Martin, A. E. (2023). Do surprisal and entropy affect delta-band signatures of syntactic processing? Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
  • Slaats, S., Meyer, A. S., & Martin, A. E. (2023). Do surprisal and entropy affect delta-band signatures of syntactic processing? Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
  • Tourtouri, E. N., & Meyer, A. S. (2023). If you hear something (don’t) say something: A dual-EEG study on sentence processing in conversational settings. Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2023). No evidence for convergence to sub-phonemic F2 shifts in shadowing. Poster presented at the 20th International Congress of the Phonetic Sciences (ICPhS 2023), Prague, Czech Republic.
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2023). The influence of contextual and talker F0 information on fricative perception. Poster presented at the 5th Phonetics and Phonology in Europe Conference (PaPE 2023), Nijmegen, The Netherlands.
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2023). Listeners converge to fundamental frequency in synchronous speech. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.

    Abstract

    Convergence broadly refers to interlocutors’ tendency to sound progressively more like each other over time. Recent empirical work has used various experimental paradigms to observe convergence in voice fundamental frequency (f0). One study used stable mean f0 over trials in a synchronous speech task with manipulated (i.e., high and low) f0 conditions (Bradshaw & McGettigan, 2021). Here, we attempted to replicate this study in Dutch. First, in a reading task, participants read 40 sentences at their own pace to establish f0 baselines. Later, in a synchronous speech task, participants read 80 sentences in synchrony with a speaker whose voice was manipulated 2 semitones above or below (in the high and low f0 conditions, respectively) a reference mean f0 value. The reference mean f0 value and the manipulation size were determined in multiple pre-tests. Our results revealed that the f0 manipulation significantly predicted f0 convergence in both the high and low f0 conditions. Furthermore, the proportion of convergers in the sample was larger than that reported by Bradshaw & McGettigan, highlighting the benefits of stimulus optimization. Our study thus provides stronger evidence that the pitch of two talkers tends to converge as they speak together.
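    A 2-semitone manipulation corresponds to a fixed multiplicative shift in Hz, since each semitone is a factor of 2**(1/12). A minimal sketch (the 120 Hz reference value below is invented for illustration, not the study's actual reference f0):

```python
def shift_semitones(f0_hz, st):
    """Shift a fundamental frequency by st semitones (12 st per octave)."""
    return f0_hz * 2 ** (st / 12)

# A hypothetical 120 Hz reference voice under the two conditions:
high = shift_semitones(120.0, +2)
low = shift_semitones(120.0, -2)
print(round(high, 1), round(low, 1))  # 134.7 106.9
```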
  • van der Burght, C. L., Schipperus, L., & Meyer, A. S. (2023). Does syntactic category constrain semantic interference during sentence production? A replication of Momma et al. (2020). Poster presented at the 29th Architectures and Mechanisms for Language Processing Conference (AMLaP 2023), Donostia–San Sebastián, Spain.
  • van der Burght, C. L., & Meyer, A. S. (2023). Does syntactic category constrain semantic interference effects during sentence production? A replication of Momma et al. (2020). Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.

    Abstract

    The semantic interference effect in picture naming entails longer naming latencies for pictures presented with semantically related versus unrelated distractors. One factor suggested to influence the effect is word category. However, results have been inconclusive. Momma et al. (2020) used a sentence-picture interference paradigm in which the sentence context (“her singing” or “she’s singing”) disambiguated the word category (noun or verb, respectively) of distractor and target, manipulating their word-category match/mismatch. Semantic interference was only found when distractor and target belonged to the same word category, suggesting that syntactic category constrains lexical competition during sentence production. Given this important theoretical conclusion, we conducted a preregistered replication study with Dutch participants, mirroring the design of the original study. In each of two experiments, 60 native speakers read sentences containing sentence-final distractor words that had to be interpreted as nouns or verbs, depending on the sentence context. Subsequently, they named target action pictures as either verbs (Experiment 1) or nouns (Experiment 2). Results of Experiment 1 showed a main effect of relatedness, suggesting a semantic interference effect regardless of word category. We discuss differences between the original and current results in terms of cross-linguistic differences in (de)compositional processing and the frequency of distractor forms.
  • Bai, F., Meyer, A. S., & Martin, A. E. (2022). The role of transitional probability in cortical tracking of hierarchical linguistic structures. Poster presented at the Experimental Psychology Society (EPS) Meeting, Keele, UK.
  • Bujok, R., Meyer, A. S., & Bosker, H. R. (2022). Beat gestures influence audiovisual lexical stress perception, while visible facial cues do not. Poster presented at the 35th Annual Conference on Human Sentence Processing (HSP 2022), Virtual meeting.
  • Bujok, R., Meyer, A. S., & Bosker, H. R. (2022). Visible lexical stress cues on the face do not influence audiovisual speech perception. Talk presented at Speech Prosody 2022. Lisbon, Portugal. 2022-05-23 - 2022-05-26.
  • Bujok, R., Peeters, D., Meyer, A. S., & Bosker, H. R. (2022). Do manual beat gestures recalibrate the perception of lexical stress? Talk presented at the Psychonomic Society - 63rd Annual Meeting. Boston, USA. 2022-11-17 - 2022-11-20.
  • Bujok, R., Meyer, A. S., & Bosker, H. R. (2022). Not all visual cues to lexical stress affect audiovisual speech perception: beat gestures vs. articulatory cues. Poster presented at IMPRS Conference 2022, Virtual meeting.
  • Bujok, R., Peeters, D., Meyer, A. S., & Bosker, H. R. (2022). Recalibration of lexical stress perception can be driven by visual beat gestures. Talk presented at the Dag van de Fonetiek 2022. Utrecht, NL. 2022-12-16 - 2022-12-16.
  • Hintz, F., Voeten, C. C., McQueen, J. M., & Meyer, A. S. (2022). Quantifying the relationships between linguistic experience, general cognitive skills and linguistic processing skills. Talk presented at the 44th Annual Meeting of the Cognitive Science Society (CogSci 2022). Toronto, Canada. 2022-07-27 - 2022-07-30.
  • Hintz, F., McQueen, J. M., & Meyer, A. S. (2022). The principal dimensions of speaking and listening skills. Talk presented at the 22nd Conference of the European Society for Cognitive Psychology (ESCOP 2022). Lille, France. 2022-08-29 - 2022-09-01.
  • Hustá, C., Nieuwland, M. S., & Meyer, A. S. (2022). Capturing the attentional trade-off between speech planning and comprehension: Evidence from the N100. Poster presented at the IMPRS Conference 2022, Nijmegen, the Netherlands.
  • Hustá, C., Nieuwland, M. S., & Meyer, A. S. (2022). Electrophysiological signatures of speech planning during comprehension. Poster presented at the 18th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
  • Meyer, A. S. (2022). The art of conversation [Broadbent lecture]. Talk presented at the 22nd Conference of the European Society for Cognitive Psychology (ESCOP 2022). Lille, France. 2022-08-29 - 2022-09-01.
  • Meyer, A. S. (2022). Timing in conversation. Talk presented at the Symposium of the Social Sciences and Humanities Section of the Max Planck Society. Berlin, Germany. 2022-06-21.
  • He, J., Meyer, A. S., Creemers, A., & Brehm, L. (2022). How to conduct language production research online: A web-based study of semantic context and name agreement effects in multi-word production. Poster presented at the 18th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
  • Slaats, S., Weissbart, H., Schoffelen, J.-M., Meyer, A. S., & Martin, A. E. (2022). Sentential embedding modulates the low-frequency neural response to words. Poster presented at the 18th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
  • Takashima, A., Hintz, F., McQueen, J. M., Meyer, A. S., & Hagoort, P. (2022). The neuronal underpinnings of variability in language skills. Talk presented at the 22nd Conference of the European Society for Cognitive Psychology (ESCOP 2022). Lille, France. 2022-08-29 - 2022-09-01.
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2022). Both contextual and talker-bound F0 information affect voiceless fricative perception. Talk presented at De Dag van de Fonetiek. Utrecht, The Netherlands. 2022-12-16.
  • Bujok, R., Meyer, A. S., & Bosker, H. R. (2021). Lexical stress perception is influenced by seeing a talker’s gesture, but not face. Talk presented at the 19th Annual Auditory Perception, Cognition and Action Meeting (APCAM 2021). Virtual meeting. 2021-11-04.
  • Bujok, R., Meyer, A. S., & Bosker, H. R. (2022). The role of visual articulatory vs. gestural cues in audiovisual lexical stress perception. Talk presented at DGfS-Workshop: Visual Communication. New Theoretical and Empirical Developments (ViCom 2022). Virtual meeting. 2022-02-23 - 2022-02-25.
  • Creemers, A., & Meyer, A. S. (2021). Depth of processing influences referential ambiguity resolution. Poster presented at the 27th Architectures and Mechanisms for Language Processing Conference (AMLaP 2021), (virtual conference).
  • Hintz, F., Wolf, M. C., Rowland, C. F., & Meyer, A. S. (2021). Evidence for shared knowledge and access processes across comprehension and production: Literacy enhances spoken word comprehension and word production. Poster presented at the 27th Architectures and Mechanisms for Language Processing Conference (AMLaP 2021), Paris, France.
  • Hintz, F., Voeten, C. C., Isakoglou, C., McQueen, J. M., & Meyer, A. S. (2021). Individual differences in language ability: Quantifying the relationships between linguistic experience, general cognitive skills and linguistic processing skills. Talk presented at the 34th Annual CUNY Conference on Human Sentence Processing (CUNY 2021). Philadelphia, USA. 2021-03-04 - 2021-03-06.
  • He, J., Meyer, A. S., Creemers, A., & Brehm, L. (2021). Lexical selection in spoken production: A web-based study of the effects of semantic context and name agreement in multi-word production. Poster presented at the 27th Architectures and Mechanisms for Language Processing Conference (AMLaP 2021), (virtual conference).
  • Slaats, S., Weissbart, H., Schoffelen, J.-M., Meyer, A. S., & Martin, A. E. (2021). Sentences modulate the low-frequency neural encoding of words. Poster presented at the 27th Architectures and Mechanisms for Language Processing Conference (AMLaP 2021), (virtual conference).
  • Tourtouri, E. N., & Meyer, A. S. (2021). Ordering adjectives with(out) restrictions. Poster presented at the 27th Architectures and Mechanisms for Language Processing Conference (AMLaP 2021), Paris, France.
  • Tourtouri, E. N., & Meyer, A. S. (2021). Verbs that are produced late are accessed early: Evidence from Dutch present perfect. Poster presented at the 27th Architectures and Mechanisms for Language Processing Conference (AMLaP 2021), Paris, France.
  • Bosker, H. R., Meyer, A. S., & Maslowski, M. (2020). When speech cues are not integrated immediately: Evidence from the global speech rate effect. Poster presented at the 26th Architectures and Mechanisms for Language Processing Conference (AMLaP 2020), Potsdam, Germany.
  • Zormpa, E., Meyer, A. S., & Brehm, L. (2020). Communicative intentions influence memory for conversations. Poster presented at the 26th Architectures and Mechanisms for Language Processing Conference (AMLaP 2020), Potsdam, Germany.
  • Zormpa, E., Meyer, A. S., & Brehm, L. (2020). Answers are remembered better than the questions themselves. Poster presented at the Experimental Psychology Society (EPS) Meeting, Kent, Canterbury.

    Abstract

    When we communicate, we often use language to identify and successfully transmit new information. We can highlight new and important information by focussing it through pitch, syntactic structure, or semantic content. Previous work has shown that focussed information is remembered better than neutral or unfocussed information. However, most of this work has used structures, like clefts and pseudo-clefts, that are rarely found in communication. We used spoken question-answer pairs, a frequent structure in which the answers are focussed relative to the questions, to examine whether answers are remembered better than questions. On each trial, participants (n = 48) saw three pictures on the screen while listening to a recorded question-answer exchange between two people, such as “What should move under the crab? – The sunflower!”. In an online Yes/No recognition memory test on the next day, participants recognised the names of pictures that had appeared in answers 6% more accurately than the names of pictures that had appeared in questions (β = 0.27, Wald z = 4.51, 95% CI = [0.15, 0.39], p < .001). Thus, linguistic focus affected memory for the words of an overheard conversation. We discuss the methodological and theoretical implications of the findings for studies of conversation.

    Additional information

    https://osf.io/w72r4/
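    The reported effect statistics are internally consistent under a standard Wald construction (z = β/SE, CI = β ± 1.96 × SE). A quick check, not the authors' analysis code:

```python
# Reported: beta = 0.27, Wald z = 4.51, 95% CI = [0.15, 0.39].
beta, wald_z = 0.27, 4.51
se = beta / wald_z                        # implied standard error, ~0.06
ci = (beta - 1.96 * se, beta + 1.96 * se)
print(f"SE = {se:.3f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
# SE = 0.060, 95% CI = [0.15, 0.39]
```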
  • Alday, P. M., & Meyer, A. S. (2019). Conversation as a competitive sport. Talk presented at Architectures and Mechanisms for Language Processing (AMLaP 2019). Moscow, Russia. 2019-09-06 - 2019-09-08.
  • Bartolozzi, F., Jongman, S. R., & Meyer, A. S. (2019). Divided attention from speech-planning does not eliminate repetition priming from spoken words: Evidence from a dual-task paradigm. Poster presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019), Tenerife, Spain.
  • Brehm, L., & Meyer, A. S. (2019). Coordinating speech in conversation relies on expectations of timing and content. Poster presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019), Tenerife, Spain.
  • Favier, S., Meyer, A. S., & Huettig, F. (2019). Does literacy predict individual differences in syntactic processing? Talk presented at the International Workshop on Literacy and Writing systems: Cultural, Neuropsychological and Psycholinguistic Perspectives. Haifa, Israel. 2019-02-18 - 2019-02-20.
  • Favier, S., Wright, A., Meyer, A. S., & Huettig, F. (2019). Proficiency modulates between- but not within-language structural priming. Poster presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019), Tenerife, Spain.
  • De Heer Kloots, M., Raviv, L., & Meyer, A. S. (2019). Memory and generalization: How do group size, structure and learnability relate in lab-evolved artificial languages? Talk presented at the Culture Conference 2019: Communication in Culture. Stirling, UK. 2019-07-01 - 2019-07-02.
  • Hintz, F., Jongman, S. R., Dijkhuis, M., Van 't Hoff, V., McQueen, J. M., & Meyer, A. S. (2019). Assessing individual differences in language processing: A novel research tool. Talk presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019). Tenerife, Spain. 2019-09-25 - 2019-09-28.

    Abstract

    Individual differences in language processing are prevalent in our daily lives. However, for decades, psycholinguistic research has largely ignored variation in the normal range of abilities. Recently, scientists have begun to acknowledge the importance of inter-individual variability for a comprehensive characterization of the language system. In spite of this change of attitude, empirical research on individual differences is still sparse, which is in part due to the lack of a suitable research tool. Here, we present a novel battery of behavioral tests for assessing individual differences in language skills in younger adults. The Dutch prototype comprises 29 subtests and assesses many aspects of language knowledge (grammar and vocabulary), linguistic processing skills (word and sentence level) and general cognitive abilities involved in using language (e.g., WM, IQ). Using the battery, researchers can determine performance profiles for individuals and link them to neurobiological or genetic data.
  • Kaufeld, G., Bosker, H. R., Alday, P. M., Meyer, A. S., & Martin, A. E. (2019). A timescale-specific hierarchy in cortical oscillations during spoken language comprehension. Poster presented at Language and Music in Cognition: Integrated Approaches to Cognitive Systems (Spring School 2019), Cologne, Germany.
  • Kaufeld, G., Bosker, H. R., Alday, P. M., Meyer, A. S., & Martin, A. E. (2019). Structure and meaning entrain neural oscillations: A timescale-specific hierarchy. Poster presented at the 26th Annual meeting of the Cognitive Neuroscience Society (CNS 2019), San Francisco, CA, USA.
  • Meyer, A. S. (2019). A cognitive psychologist’s view of conversation. Talk presented at the Institute of Language, Cognition, and the Brain. Aix Marseille, France. 2019-04-26.
  • Meyer, A. S., & Jongman, S. R. (2019). Why conversations are easy to hold and hard to study [keynote]. Talk presented at Architectures and Mechanisms for Language Processing (AMLaP 2019). Moscow, Russia. 2019-09-06 - 2019-09-08.
  • Meyer, A. S. (2019). Towards processing theories of conversation. Talk presented at the Leiden University Centre for Linguistics. Leiden, The Netherlands. 2019-06-07.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2019). Input variability promotes the emergence of linguistic structure. Poster presented at the Inaugural workshop of the Center for the Interdisciplinary Study of Language Evolution (ISLE), Zürich, Switzerland.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2019). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Cognitive Science Department Colloquium Series, Haifa University. Haifa, Israel. 2019-04-07.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2019). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Language, Memory, and Attention group, Cognitive Department Colloquium Series, Royal Holloway, University of London. London, UK. 2019-06-20.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2019). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Psychology Department, Hebrew University of Jerusalem. Jerusalem, Israel. 2019-04-04.
  • Rodd, J., Bosker, H. R., Ernestus, M., Meyer, A. S., & Bosch, L. t. (2019). The speech production system is reconfigured to change speaking rate. Poster presented at the 3rd Phonetics and Phonology in Europe conference (PaPe 2019), Lecce, Italy.
  • Rodd, J., Bosker, H. R., Ernestus, M., Meyer, A. S., & Bosch, L. t. (2019). The speech production system is reconfigured to change speaking rate. Poster presented at Crossing the Boundaries: Language in Interaction Symposium, Nijmegen, The Netherlands.

    Abstract

    It is evident that speakers can freely vary stylistic features of their speech, such as speech rate, but how they accomplish this has hardly been studied, let alone been implemented in a formal model of speech production. Much as in walking and running, where qualitatively different gaits are required to cover the gamut of different speeds, we might predict there to be multiple qualitatively distinct configurations, or ‘gaits’, in the speech planning system that speakers must switch between to alter their speaking rate or style. Alternatively, control might involve continuous modulation of a single ‘gait’. We investigate these possibilities by simulating a connectionist computational model that mimics the temporal characteristics of observed speech. Different ‘regimes’ (combinations of parameter settings) can be engaged to achieve different speaking rates.

    The model was trained separately for each speaking rate, using an evolutionary optimisation algorithm. The training identified parameter values that allowed the model to best approximate the syllable duration distributions characteristic of each speaking rate.

    In one gait system, the regimes used to achieve fast and slow speech are qualitatively similar, but quantitatively different. In parameter space, they would be arranged along a straight line. Different points along this axis correspond to different speaking rates. In a multiple gait system, this linearity would be missing. Instead, the arrangement of the regimes would be triangular, with no obvious relationship between the regions associated with each gait, and an abrupt shift in parameter values to move from speeds associated with ‘walk-speaking’ to ‘run-speaking’.

    Our model achieved good fits for all three speaking rates. In parameter space, the arrangement of the parameter settings selected for the different speaking rates is non-axial, suggesting that ‘gaits’ are present in the speech planning system.
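    The fitting procedure can be sketched as a simple evolutionary hill climb over a regime's parameters. This is a hypothetical stand-in using a two-parameter toy model (mean and SD of the syllable duration distribution, in ms), not the connectionist model or the optimisation algorithm actually used:

```python
import random

def fit_error(regime, target):
    """Mismatch between model-implied and target duration summaries (mean, SD)."""
    (mu, sigma), (t_mu, t_sigma) = regime, target
    return abs(mu - t_mu) + abs(sigma - t_sigma)

def evolve(target, start=(300.0, 100.0), generations=300, seed=1):
    """Mutate the regime, keeping only mutations that improve the fit."""
    rng = random.Random(seed)
    regime, best = start, fit_error(start, target)
    for _ in range(generations):
        candidate = (regime[0] + rng.gauss(0, 10),           # perturb mean
                     max(1.0, regime[1] + rng.gauss(0, 5)))  # perturb SD
        err = fit_error(candidate, target)
        if err < best:
            regime, best = candidate, err
    return regime, best

# Fit a 'fast' and a 'slow' regime (invented targets) from the same start.
fast, fast_err = evolve(target=(180.0, 40.0))
slow, slow_err = evolve(target=(420.0, 120.0))
```

    Plotting the regimes fitted for several rates in parameter space would then show whether they fall along a single line (one continuously modulated gait) or in separate regions (multiple gaits).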
  • San Jose, A., Roelofs, A., & Meyer, A. S. (2019). Lapses of attention explain the distributional dynamics of semantic interference in word production: Evidence from computational simulations. Poster presented at Crossing the Boundaries: Language in Interaction Symposium, Nijmegen, The Netherlands.
  • Van Paridon, J., Roelofs, A., & Meyer, A. S. (2019). Contextual priming in shadowing and simultaneous translation. Poster presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019), Tenerife, Spain.
  • Wolf, M. C., Smith, A. C., Rowland, C. F., & Meyer, A. S. (2019). Effects of modality on learning novel word - picture associations. Talk presented at the Experimental Psychology Society London Meeting. London, UK. 2019-01-03 - 2019-01-04.

    Abstract

    It is unknown whether modality affects the efficiency with which we learn novel word forms and their meanings. In this study, 60 participants were trained on 24 pseudowords, each paired with a pictorial meaning (novel object). Following a 20-minute filler task, participants were tested on their ability to identify the picture-word form pairs on which they were trained when presented amongst foils. Word forms were presented in either their written or spoken form, with exposure to the written form equal to the speech duration of the spoken form. The between-subjects design generated four participant groups: 1) written training, written test; 2) written training, spoken test; 3) spoken training, written test; 4) spoken training, spoken test. Our results show a written training advantage: participants trained on written words were more accurate on the matching task. An ongoing follow-up experiment tests whether the written advantage is caused by additional time with the full word form, given that words can be read faster than the time taken for the spoken form to unfold. To test this, in training, written words were presented with sufficient time for participants to read them, yet maximally half the duration of the spoken form in Experiment 1.
  • Wolf, M. C., Smith, A. C., Rowland, C. F., & Meyer, A. S. (2019). Modality effects in novel picture-word form associations. Poster presented at Crossing the Boundaries: Language in Interaction Symposium, Nijmegen, The Netherlands.

    Abstract

    It is unknown whether modality affects the efficiency with which humans learn novel word forms and their meanings, with previous studies reporting both written and auditory advantages. The current study implements controls whose absence in previous work likely explains these contradictory findings. In two novel word learning experiments, participants were trained and tested on pseudoword - novel object pairs, with controls on: modality of test, modality of meaning, duration of exposure, and transparency of word form. In both experiments word forms were presented in either their written or spoken form, each paired with a pictorial meaning (novel object). Following a 20-minute filler task, participants were tested on their ability to identify the picture-word form pairs on which they were trained. A between-subjects design generated four participant groups per experiment: 1) written training, written test; 2) written training, spoken test; 3) spoken training, written test; 4) spoken training, spoken test. In Experiment 1 the written stimulus was presented for a time period equal to the duration of the spoken form. Results showed that when the duration of exposure was equal, participants displayed a written training benefit. Given that words can be read faster than the time taken for the spoken form to unfold, in Experiment 2 the written form was presented for 300 ms, sufficient time to read the word yet 65% shorter than the duration of the spoken form. No modality effect was observed under these conditions, when exposure to the word form was equivalent. These results demonstrate, at least for proficient readers, that when exposure to the word form is controlled across modalities, the efficiency with which word form-meaning associations are learnt does not differ. Our results therefore suggest that, although we typically begin as aural-only word learners, we ultimately converge on learning mechanisms that learn equally efficiently from written and spoken materials.
  • Wolf, M. C., Smith, A. C., Meyer, A. S., & Rowland, C. F. (2019). Modality effects in vocabulary acquisition. Talk presented at the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019). Montreal, Canada. 2019-07-24 - 2019-07-27.

    Abstract

    It is unknown whether modality affects the efficiency with which humans learn novel word forms and their meanings, with previous studies reporting both written and auditory advantages. The current study implements controls whose absence in previous work likely explains such contradictory findings. In two novel word learning experiments, participants were trained and tested on pseudoword–novel object pairs, with controls on: modality of test, modality of meaning, duration of exposure, and transparency of word form. In both experiments word forms were presented in either their written or spoken form, each paired with a pictorial meaning (novel object). Following a 20-minute filler task, participants were tested on their ability to identify the picture-word form pairs on which they were trained. A between-subjects design generated four participant groups per experiment: 1) written training, written test; 2) written training, spoken test; 3) spoken training, written test; 4) spoken training, spoken test. In Experiment 1 the written stimulus was presented for a time period equal to the duration of the spoken form. Results showed that when the duration of exposure was equal, participants displayed a written training benefit. Given that words can be read faster than the time taken for the spoken form to unfold, in Experiment 2 the written form was presented for 300 ms, sufficient time to read the word yet 65% shorter than the duration of the spoken form. No modality effect was observed under these conditions, when exposure to the word form was equivalent. These results demonstrate, at least for proficient readers, that when exposure to the word form is controlled across modalities, the efficiency with which word form-meaning associations are learnt does not differ. Our results therefore suggest that, although we typically begin as aural-only word learners, we ultimately converge on learning mechanisms that learn equally efficiently from both written and spoken materials.
  • Zormpa, E., Meyer, A. S., & Brehm, L. (2019). Naming pictures slowly facilitates memory for their names. Poster presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019), Tenerife, Spain.

    Abstract

    Studies on the generation effect have found that coming up with words, compared to reading them, improves memory. However, because these studies used words at both study and test, it is unclear whether generation affects visual or conceptual/lexical representations. Here, participants named pictures after hearing the picture name (no-generation condition), backward speech, or an unrelated word (easy and harder generation conditions). We ruled out effects at the visual level by testing participants’ recognition memory on the written names of the pictures that were named earlier. We also assessed the effect of processing time during generation on memory. In the recognition memory test, participants were more accurate in the generation conditions than in the no-generation condition. They were also more accurate for words that took longer to retrieve, but only when generation was required. This work shows that generation affects conceptual/lexical representations and informs our understanding of the relationship between language and memory.
  • Araújo, S., Konopka, A. E., Meyer, A. S., Hagoort, P., & Weber, K. (2018). Effects of verb position on sentence planning. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
  • Fairs, A., Bögels, S., & Meyer, A. S. (2018). Serial or parallel dual-task language processing: Production planning and comprehension are not carried out in parallel. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
  • Favier, S., Meyer, A. S., & Huettig, F. (2018). Does literacy predict individual differences in the syntactic processing of spoken language? Poster presented at the 1st Workshop on Cognitive Science of Culture, Lisbon, Portugal.
  • Favier, S., Meyer, A. S., & Huettig, F. (2018). Does reading ability predict individual differences in spoken language syntactic processing? Poster presented at the International Meeting of the Psychonomics Society 2018, Amsterdam, The Netherlands.
  • Favier, S., Meyer, A. S., & Huettig, F. (2018). How does literacy influence syntactic processing in spoken language? Talk presented at Psycholinguistics in Flanders (PiF 2018). Ghent, Belgium. 2018-06-04 - 2018-06-05.
  • Hintz, F., Jongman, S. R., McQueen, J. M., & Meyer, A. S. (2018). Individual differences in word production: Evidence from students with diverse educational backgrounds. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
  • Hintz, F., Jongman, S. R., Dijkhuis, M., Van 't Hoff, V., Damian, M., Schröder, S., Brysbaert, M., McQueen, J. M., & Meyer, A. S. (2018). STAIRS4WORDS: A new adaptive test for assessing receptive vocabulary size in English, Dutch, and German. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2018), Berlin, Germany.
  • Hintz, F., Jongman, S. R., McQueen, J. M., & Meyer, A. S. (2018). Verbal and non-verbal predictors of word comprehension and word production. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2018), Berlin, Germany.
  • Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2018). Evidence for in-group biases in source memory for newly learned words. Poster presented at the International Conference on Learning and Memory (LearnMem 2018), Huntington Beach, CA, USA.
  • Jongman, S. R., Piai, V., & Meyer, A. S. (2018). Withholding speech: Does the EEG signal reflect planning for production or attention? Poster presented at the 31st Annual CUNY Conference on Human Sentence Processing, Davis, CA, USA.
  • Mainz, N., Smith, A. C., & Meyer, A. S. (2018). Individual differences in word learning - An exploratory study of adult native speakers. Talk presented at the Experimental Psychology Society London Meeting. London, UK. 2018-01-03 - 2018-01-05.
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2018). Do effects of habitual speech rate normalization on perception extend to self? Talk presented at Psycholinguistics in Flanders (PiF 2018). Ghent, Belgium. 2018-06-04 - 2018-06-05.

    Abstract

    Listeners are known to use contextual speech rate in processing temporally ambiguous speech sounds. For instance, a fast adjacent speech context makes a vowel sound relatively long, whereas a slow context makes it sound relatively short (Reinisch & Sjerps, 2013). Besides the local contextual speech rate, listeners also track talker-specific habitual speech rates (Reinisch, 2016; Maslowski et al., in press). However, effects of one’s own speech rate on the perception of another talker’s speech are yet unexplored. Such effects are potentially important, given that, in dialogue, a listener’s own speech often constitutes the context for the interlocutor’s speech. Three experiments tested the contribution of self-produced speech on perception of the habitual speech rate of another talker. In Experiment 1, one group of participants was instructed to speak fast (high-rate group), whereas another group had to speak slowly (low-rate group; 16 participants per group). The two groups were compared on their perception of ambiguous Dutch /A/-/a:/ vowels embedded in neutral rate speech from another talker. In Experiment 2, the same participants listened to playback of their own speech, whilst evaluating target vowels in neutral rate speech as before. Neither of these experiments provided support for the involvement of self-produced speech in perception of another talker's speech rate. Experiment 3 repeated Experiment 2 with a new participant sample, who did not know the participants from the previous two experiments. Here, a group effect was found on perception of the neutral rate talker. This result replicates the finding of Maslowski et al. that habitual speech rates are perceived relative to each other (i.e., neutral rate sounds fast in the presence of a slower talker and vice versa), with naturally produced speech. Taken together, the findings show that self-produced speech is processed differently from speech produced by others. They carry implications for our understanding of the perceptual and cognitive mechanisms involved in rate-dependent speech perception and the link between production and perception in dialogue settings.
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2018). How speech rate normalization affects lexical access. Talk presented at Architectures and Mechanisms for Language Processing (AMLaP 2018). Berlin, Germany. 2018-09-06 - 2018-09-08.
  • Maslowski, M., Meyer, A. S., & Bosker, H. R. (2018). Self-produced speech rate is processed differently from other talkers' rates. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.

    Abstract

    Interlocutors perceive phonemic category boundaries relative to talkers’ produced speech rates. For instance, a temporally ambiguous vowel between Dutch short /A/ and long /a:/ sounds short (i.e., as /A/) in a slow speech context, but long in a fast context. Besides the local contextual speech rate, listeners also track talker-specific habitual speech rates (Maslowski et al., in press). However, it is yet unclear whether self-produced speech rate modulates perception of another talker’s habitual rate. Such effects are potentially important, given that, in dialogue, a listener’s own speech often constitutes the context for the interlocutor’s speech. Three experiments addressed this question. In Experiment 1, one group of participants was instructed to speak fast, whereas another group had to speak slowly (16 participants per group). The two groups were then compared on their perception of ambiguous Dutch /A/-/a:/ vowels embedded in neutral rate speech from another talker. In Experiment 2, the same participants listened to playback of their own speech, whilst evaluating target vowels in neutral rate speech as before. Neither of these experiments provided support for the involvement of self-produced speech in perception of another talker's speech rate. Experiment 3 repeated Experiment 2 with a new participant sample, who were unfamiliar with the participants from the previous two experiments. Here, a group effect was found on perception of the neutral rate talker. This result replicates the finding of Maslowski et al. that habitual speech rates are perceived relative to each other (i.e., neutral rate sounds fast in the presence of a slower talker and vice versa), with naturally produced speech. Taken together, the findings show that self-produced speech is processed differently from speech produced by others. They carry implications for our understanding of the link between production and perception in dialogue.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). Network structure and the cultural evolution of linguistic structure: An artificial language study. Talk presented at the Cultural Evolution Society Conference (CES 2018). Tempe, AZ, USA. 2018-10-22 - 2018-10-24.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Linguistics Department Colloquium Series. University of Arizona, Tucson, AZ, USA. 2018-10-26.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Language Evolution Seminar, Centre for Language Evolution, University of Edinburgh. Edinburgh, UK. 2018-08-21.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Cohn Institute for the History and Philosophy of Science, Tel Aviv University. Tel Aviv, Israel. 2018-12-23.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Department of Linguistics, Tel Aviv University. Tel Aviv, Israel. 2018-12-25.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). Social structure affects the emergence of linguistic structure: Experimental evidence. Talk presented at the Donders Discussions 2018. Nijmegen, The Netherlands. 2018-10-11.
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). The role of community size in the emergence of linguistic structure. Talk presented at the 12th International Conference on the Evolution of Language: (EVOLANG XII). Torun, Poland. 2018-04-15 - 2018-04-19.
  • Rodd, J., Bosker, H. R., Meyer, A. S., Ernestus, M., & Ten Bosch, L. (2018). How to speed up and slow down: Speaking rate control to the level of the syllable. Talk presented at the New Observations in Speech and Hearing seminar series, Institute of Phonetics and Speech processing, LMU Munich. Munich, Germany.
  • Rodd, J., Bosker, H. R., Ernestus, M., Meyer, A. S., & Ten Bosch, L. (2018). Run-speaking? Simulations of rate control in speech production. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2018), Berlin, Germany.
  • Rodd, J., Bosker, H. R., Ernestus, M., Meyer, A. S., & Ten Bosch, L. (2018). Running or speed-walking? Simulations of speech production at different rates. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.

    Abstract

    That speakers can vary their speaking rate is evident, but how they accomplish this has hardly been studied. The effortful experience of deviating from one's preferred speaking rate might result from shifting between different regimes (system configurations) of the speech planning system. This study investigates control over speech rate through simulations of a new connectionist computational model of the cognitive process of speech production, derived from Dell, Burger and Svec’s (1997) model to fit the temporal characteristics of observed speech. We draw an analogy from human movement: the selection of walking and running gaits to achieve different movement speeds. Are the regimes of the speech production system arranged into multiple ‘gaits’ that resemble walking and running?
    During training of the model, different parameter settings are identified for different speech rates, which can be equated with the regimes of the speech production system. The parameters can be considered dimensions of a high-dimensional ‘regime space’, in which different regimes occupy different parts of the space.
    In a single-gait system, the regimes are qualitatively similar but quantitatively different: they are arranged along a straight line through regime space, and different points along this axis correspond directly to different speaking rates. In a multiple-gait system, the arrangement of the regimes is more dispersed, with no obvious relationship between the regions associated with each gait.
    After training, the model achieved good fits at all three speaking rates, and the parameter settings associated with each speaking rate were different. The broad arrangement of the parameter settings for the different speaking rates in regime space was non-axial, suggesting that ‘gaits’ may be present in the speech planning system.
  • Rodd, J., Bosker, H. R., Ernestus, M., Ten Bosch, L., & Meyer, A. S. (2018). To speed up, turn up the gain: Acoustic evidence of a 'gain-strategy' for speech planning in accelerated and decelerated speech. Poster presented at LabPhon16 - Variation, development and impairment: Between phonetics and phonology, Lisbon, Portugal.
  • Takashima, A., Meyer, A. S., Hagoort, P., & Weber, K. (2018). Lexical and syntactic memory representations for sentence production: Effects of lexicality and verb arguments. Poster presented at the International Workshop on Language Production (IWLP 2018), Nijmegen, The Netherlands.
  • Takashima, A., Meyer, A. S., Hagoort, P., & Weber, K. (2018). Producing sentences in the MRI scanner: Effects of lexicality and verb arguments. Poster presented at the Tenth Annual Meeting of the Society for the Neurobiology of Language (SNL 2018), Quebec, Canada.