Please note NEW VENUE
Museum het Valkhof, Kelfkensbos 59, Nijmegen
Day 1 - March 26
Antje Meyer – Introduction
Discrepancies in comprehension vs. production, in acquisition and beyond
Eve V. Clark
Researchers from Brown and Bellugi (1961) on have noted that what young children understand (their C-representations) often does not match what they say (their P-representations): they understand “fish” but say ‘fis’. I first review the extensive mismatches in phonology, with added data on final voiced stops, and mismatches in word meaning. I then focus on mismatches in novel word-formations. In one study, for example, the data show that children’s C-representations for compounds like tree-cutter ‘someone who cuts trees’ are based directly on the adult forms, while their P-representations are quite different: ‘What could you call someone who makes bridges?’ at age 4 elicits ‘a make-bridge’ instead of the adult bridge-maker. Yet the same children interpret word-forms like make-bridge as denoting some kind of bridge. The misalignment between the P-representations and C-representations in these children points to two conclusions: (a) children need to align their P-representations with the relevant subset of their C-representations in order to make themselves understood, and (b) there is a lifelong asymmetry between C- and P-representations for linguistic forms, where P-representations are only ever aligned with a subset of C-representations.
Child language perception and production: shared, separate, or merging lexical representations?
It is clear that words in early child language productions often deviate from their adult targets. Do these deviating word forms result from deviating lexical representations, which in turn result from a deviating early perception mechanism? If lexical representations form the link between production and perception, deviating lexical representations are expected to affect both production and perception. But can we say anything at all about young children’s lexical representations, or is it impossible to determine whether deviations result from immature lexical representations, from processing constraints, or from a combination thereof? In this paper I will make an attempt by discussing three types of child language production data: the earliest, homorganic productions; productions with omitted target codas; and productions with reduced target onset clusters. These data will be linked to available ‘homorganic’, coda, and onset cluster perception data from young children. We will discuss whether taking into account both child language production and perception data can give us more insight into the nature of (developing) lexical representations.
Effects of phonetic variation in understanding spoken words and what they tell us about linguistic representations
Research investigating the perception and recognition of spoken words has focused on phonetic and phonological patterns in speech to investigate both the nature of linguistic representations and access to these representations given a highly variable input. Across various studies, one thing is certain: Adult participants understand highly variable speech across speech styles, regional accents, and non-native accents, but produce a much more constrained set of word forms. I provide an overview of research illustrating this behavior. Then, I focus on cases of mismatch between perception and production, showing that listeners can be near-native in listening to infrequent productions of spoken words. Next, I show that listeners have highly specific memories for particular word forms, and that over time, there is little evidence that this variable input is mapped to a single abstract word form. Finally, I suggest the data show that not all experiences with phonetic variants of a single lexical item are equal, leaving us with some representations that are dense but weakly encoded and others that are sparse but strongly encoded. This asymmetry gives the illusion of a single representation at work in short-term recognition studies, but predicts the differences we see in longer-term studies.
The link between speech production and comprehension in the processing of pronunciation variation
Radboud University Nijmegen
In informal situations, words are often pronounced with fewer segments than in their citation forms. For instance, in casual conversations, the word yesterday may sound like yeshay. In this talk, I will discuss data from production and comprehension experiments and from corpus-based research revealing the mechanisms underlying the production and comprehension of these reduced pronunciation variants.
The data show that these variants often result from reduction in and from overlap of articulatory gestures. The comprehension mechanisms of these articulatory reductions are based, among other things, on residues in the acoustic signal, on the listener's experience with the reduction patterns in the language, and on the listener's inability to perceive certain acoustic differences. Importantly, listeners comprehend articulatory reductions most easily under the conditions (e.g. in those segmental contexts) in which they occur most frequently, which suggests strong links between the two processes. I will argue that the comprehension process is influenced more by the production mechanisms than vice versa.
Our experiments have also shown that pronunciation variants may have their own lexical representations, which can be used during both speech production and comprehension. I will argue that one and the same lexical representation serves for both processes.
Processes and representations in word recognition and production: Insights from pronunciation variants
University of Geneva
A key issue in psycholinguistic research concerns the cognitive mechanisms and representations underlying the processing of word forms. Studies have addressed this issue both from the listener’s and speaker’s perspectives. With very few exceptions, these studies have examined recognition and production separately. Consequently, we still know very little regarding the relationship between the two. In this talk, I address the issue of whether spoken word recognition and production tasks recruit similar or different word form representations. I do so by examining chronometric data on the processing of pronunciation variants. In the first part of the talk, I will review the existing evidence on the recognition and production of these variants and highlight the similarities and divergences between the two. In the second part of the talk, I will focus more specifically on the role of exposure in spoken word (variant) processing and ask whether recognition and production processes are influenced by exactly the same, or different, frequency measures. I conclude that the available evidence suggests the recruitment of very similar, yet not identical, word form representations in production and recognition tasks.
What morphology may tell us about shared representations in comprehension and production
University of Münster
Although language production and comprehension are part of one language faculty, as is evident in individual language users, they have been studied separately for a long time. This is particularly striking since experimental paradigms often combine production and comprehension. This holds for production studies of syntactic priming, for comprehension studies using verbal naming responses, and most prominently for the picture-word paradigm, the paradigm par excellence to study lexical access in speech production. The paradigm combines pictures – the targets for speech production – with visual or spoken 'distractor' words, which are processed by the comprehension system. This presents an excellent means to study the interplay between the two language modes in more detail.
Focusing on lexical access and drawing on data accumulated over at least two decades, I will report on similarities and differences between comprehension and production concerning processing and representation at the levels of word form, lemma and semantics. I will evaluate data from priming studies with pure comprehension tasks (lexical decision), production tasks (word naming), and from picture-naming studies with written distractors. Primes and targets, or distractors and pictures, can share semantic aspects, morphemes, and segments. The overall picture that emerges from these studies is that representations are shared at multiple levels, but that the processes operating on this information are different.
Production mechanisms, the distribution of sentence structures, and sentence comprehension
University of Wisconsin-Madison
How much of a speaker’s word order can be traced to the fact that speaking is a motor act? Not everything, certainly, but I will argue that some key properties of language production, and more generally properties of memory retrieval and motor planning, promote certain utterance forms over other alternatives. These biases create robust distributional regularities in the language environment of perceivers, which are learned by perceivers and used during both online language comprehension and subsequent language production. I’ll illustrate these phenomena with cross-linguistic studies of language production in adults, changing patterns of language production in childhood, and the relationship between distributional patterns of language use in corpora and patterns of language comprehension.
Day 2 - March 27
Speaking and understanding as coordinated processes
Herbert H. Clark
Speaking and understanding are asymmetrical processes: Understanding is easy, flexible, and independent of one’s ability to speak, whereas speaking is difficult, inflexible, and dependent on one’s ability to understand. This asymmetry makes sense once speaking and understanding are considered distinct skills. Understanding speech is just one member of a large family of skills in recognizing patterns. Other members include: identifying birds from their songs, plumage, or flight patterns; identifying trees, flowers, and people from their visual appearance; identifying symphonies from hearing them; and identifying authors from their style. People acquire these skills without any counterpart skills in production, and that is also true of understanding. Speaking, in contrast, is impossible without the skill of understanding. In Levelt’s model, speakers monitor their speech in order to make corrections when needed. People who cannot monitor their speech (e.g., the deaf) simply cannot acquire this skill. But a speaker’s monitor is very specialized: The patterns people can monitor for in their own speech are a vanishingly small subset of the speech patterns they are able to recognize. I will take up evidence for this view and some of its consequences.
Self-, other-, and joint monitoring using forward models
University of Edinburgh
In the psychology of language, most accounts of self-monitoring assume that it is based on comprehension. Here we outline and develop the alternative account proposed by Pickering and Garrod (2013), in which speakers construct forward models of their upcoming utterances and compare them with the utterances as they produce them. We propose that speakers compute inverse models derived from the discrepancy (error) between the utterance and the predicted utterance and use that to modify their production command or (occasionally) begin anew. We then propose that comprehenders monitor other people’s speech by simulating their utterances using covert imitation and forward models, and then comparing those forward models with what they hear. They use the discrepancy to compute inverse models and modify their representation of the speaker’s production command, or realize that their representation is incorrect and may develop a new production command. We then discuss monitoring in dialogue, paying attention to sequential contributions, concurrent feedback, and the relationship between monitoring and alignment.
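The forward/inverse-model loop described above can be illustrated with a deliberately simple control sketch. This is a toy illustration, not Pickering and Garrod's actual proposal: the articulatory mapping is reduced to a single gain, and the gains, learning rate, and function name are all invented. The forward model predicts the outcome of a production command, the discrepancy with the realized utterance serves as the monitoring signal, and an inverse-model-style update recalibrates the mapping.

```python
# Toy sketch of self-monitoring with forward and inverse models.
# All quantities are hypothetical scalars, chosen for illustration only.

def monitor_with_forward_model(target, steps=25, lr=0.5):
    true_gain = 2.0    # actual articulatory mapping (unknown to the speaker)
    model_gain = 1.0   # the speaker's internal forward-model estimate
    errors = []
    for _ in range(steps):
        command = target / model_gain      # inverse model: plan the command
        predicted = model_gain * command   # forward model: predicted utterance
        actual = true_gain * command       # realized utterance
        error = actual - predicted         # monitored discrepancy
        errors.append(abs(error))
        model_gain += lr * error * command # recalibrate the forward model
    return errors

errors = monitor_with_forward_model(1.0)
```

Under these assumptions the monitored discrepancy shrinks over successive productions, capturing the idea that the error signal is used to modify the production command rather than being detected via comprehension.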
Imagining and anticipating another speaker’s utterances in joint language tasks
Are the processes by which we imagine and anticipate that somebody is about to speak, what she is about to say, and when she is going to say it fundamentally related to the process by which we prepare to speak ourselves? I will present the results of several experiments that investigated this question using joint language production tasks (Gambi & Pickering, 2011). In these tasks, speakers describe pictures while they believe that their partner is speaking (simultaneous speaking; Gambi, Van de Cavey, & Pickering, 2013), or preparing to speak (consecutive speaking). Taken together, the evidence suggests that the processes of imagining and anticipating utterances partially overlap with the process of utterance production.
Learning to comprehend using production representations in a connectionist model
University of Liverpool
Production representations are learned from comprehended input. One explicit approach to explaining how this is done is with a connectionist model of sentence production called the Dual-path model (Chang, Dell, & Bock, 2006). The model learns to do production by generating predictions using initially random representations. The mismatch between these predictions and comprehended inputs is used to change the production system, so that it is better able to predict in the future. Over time, the model is able to learn adult-like production representations, and the same algorithm can work for several typologically-different languages (e.g., English, Japanese, and German; Chang, 2009; Chang, Baumann, Pappert, & Fitz, submitted). While the model has reasonable coverage of production phenomena such as structural priming, heavy NP shift, and accessibility, it is not clear whether the same architecture can account for comprehension phenomena. In my talk, I will present a new spatial version of the model, which is designed to explain the main paradigm used to study sentence comprehension, namely eye-tracking in the visual world. The model suggests that one important way to understand the relationship between comprehension and production is through their dependencies in language learning.
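The core learning idea, predict the next word and let the prediction error adjust the weights that later drive production, can be sketched in a few lines. This is a minimal illustration, not the actual Dual-path architecture: it uses a flat weight table and a Widrow-Hoff-style update in place of the model's recurrent network, and the class and sentence are invented for the example.

```python
# Minimal sketch of error-driven next-word prediction learning.
# Not the Dual-path model itself: a toy table-based learner for illustration.
from collections import defaultdict

class NextWordPredictor:
    def __init__(self, lr=0.5):
        self.lr = lr
        # weights[context][word]: strength of predicting `word` after `context`
        self.weights = defaultdict(lambda: defaultdict(float))

    def predict(self, context):
        """Most strongly predicted next word (None before any learning)."""
        cands = self.weights[context]
        return max(cands, key=cands.get) if cands else None

    def learn(self, sentence):
        """Delta-rule update: the mismatch between the predicted strength and
        the comprehended (heard) word changes the weights."""
        for context, heard in zip(sentence, sentence[1:]):
            for word in set(self.weights[context]) | {heard}:
                target = 1.0 if word == heard else 0.0
                error = target - self.weights[context][word]  # prediction error
                self.weights[context][word] += self.lr * error

model = NextWordPredictor()
for _ in range(5):
    model.learn("the dog chases the cat".split())
```

After a few exposures the learner predicts "chases" after "dog" purely from comprehended input, which is the sense in which prediction error trains production-capable representations in the model.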
Does prediction in language comprehension involve language production?
The notion that predicting upcoming linguistic information in language comprehension makes use of the production system has recently received much attention (e.g., Chang et al., 2006; Dell & Chang, 2014; Federmeier, 2007; Pickering & Garrod, 2007, 2013; Van Berkum et al., 2005). So far there has been little experimental evidence for a relation between prediction and production. I will discuss the results of several recent eye-tracking experiments with toddlers (Mani & Huettig, 2012) and adults (Rommers et al. submitted, Hintz et al., in prep.) which provide some support for the view that production abilities are linked to language-mediated anticipatory eye movements. These data however also indicate that production-based prediction is situation-dependent and only one of many mechanisms supporting prediction. Taken together, these results suggest that multiple-mechanism accounts are required to provide a complete picture of anticipatory language processing.
How the tongue betrays the listener: Preliminary evidence for comprehension-related anticipation effects in speech articulation
Martin Corley and Eleanor Drake
University of Edinburgh
The speech motor system is active during speech comprehension, even when no speech output is required. Recently, there has been a focus on the possibility that this activation reflects the engagement of language production processes in generating predictions of upcoming linguistic material (e.g., Pickering & Garrod, 2007; Schiller et al., 2009). However, speech motor activation during comprehension does not constitute proof that specific upcoming material is represented via the speech production system.
Here, we present a set of experiments based on the observation that listeners are able to anticipate the surface forms of upcoming words during comprehension (DeLong et al., 2005). In each experiment, participants are required to name predictable ("tap") or unpredictable ("cap") words after listening to highly constraining sentence fragments ("when we want water, we just turn on the ..."). Evidence from naming latencies suggests that there is no influence, other than semantic, of the anticipated word (for example, there is no onset-overlap advantage, contra Meyer & Schriefers, 1991). However, analyses of ultrasound recordings of listeners' tongue movements as they articulate the target words reveal a distinct influence of the predicted words, providing initial evidence that speech motor activity during comprehension may indeed be related to predictive processes.
Is self-monitoring of spoken production based on speech comprehension?
In this talk, I will review studies that have attempted to distinguish between accounts of verbal monitoring based on speech perception, such as Levelt's Perceptual Loop Theory, accounts assuming mechanisms internal to speech production, such as Nozari, Dell, and Schwartz's (2011) conflict monitoring account, and accounts assuming that representations in perception are compared to those predicted by forward models in production (Pickering & Garrod, 2013; in press). I will then present the results of a recent fMRI study that addressed this issue by testing whether speech error processing activates areas of the brain involved in speech perception (temporal areas), involved in conflict monitoring generally (cingulate areas), or areas thought to be involved in forward models of language and thought (cerebellum). Speech errors were elicited using tongue twister sentences (e.g., "Gerry grabs the ghastly grey goose"). In another condition, subjects listened to somebody else producing tongue twisters. Consistent with a conflict monitoring account, the error vs. correct comparison revealed a network of brain structures that has been related to error processing in action (e.g., ACC), but found no activation in temporal cortex or cerebellum. Interestingly, observing someone else's speech errors showed a very similar pattern of activation as producing these errors oneself.
17.30 Workshop Dinner at Museum het Valkhof
Day 3 - March 28
Lesion correlates of speech comprehension and production
Anna M. Woollams, Rebecca A. Butler, & Matthew A. Lambon Ralph
Neuroscience and Aphasia Research Unit, School of Psychological Sciences, University of Manchester
The study of patients with acquired aphasia can provide us with insights as to the neural regions that are necessary for speech comprehension and production. We present a novel approach to separating the key aspects of chronic aphasic performance and isolating their neural bases. Principal components analysis was used to extract core factors underlying performance of a case series of participants with chronic stroke aphasia on a detailed battery of behavioural assessments. The first factor loaded highly on tests of both receptive and expressive phonology. Phonological processing was uniquely related to left posterior perisylvian regions including Heschl’s gyrus, posterior middle and superior temporal gyri and superior temporal sulcus, as well as the white matter underlying the posterior superior temporal gyrus. The second factor loaded on tests of semantic processing, and was uniquely related to left anterior middle temporal gyrus and the underlying temporal stem. A third executive-cognitive factor was not correlated selectively with the structural integrity of any particular region, as might be expected in light of the widely-distributed and domain-general nature of the regions that support executive functions. The identified phonological and semantic areas align well with those highlighted by studies using other methodologies such as functional neuroimaging and neurostimulation.
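The analysis strategy described above, extracting core behavioural factors by principal components analysis before relating them to lesion data, can be illustrated on synthetic scores. This is a sketch under invented assumptions: the patient numbers, test names, and loadings are fabricated for illustration and bear no relation to the actual battery; PCA is computed here via the SVD of the centred score matrix.

```python
# Illustrative PCA of a synthetic behavioural battery (invented data):
# two latent abilities generate four test scores; SVD recovers the factors.
import numpy as np

rng = np.random.default_rng(0)
n_patients = 40
phon_ability = rng.normal(size=n_patients)  # latent phonological factor
sem_ability = rng.normal(size=n_patients)   # latent semantic factor

# Each column is one hypothetical test, loading on one factor plus noise
battery = np.column_stack([
    phon_ability + 0.3 * rng.normal(size=n_patients),  # e.g. nonword repetition
    phon_ability + 0.3 * rng.normal(size=n_patients),  # e.g. phoneme discrimination
    sem_ability + 0.3 * rng.normal(size=n_patients),   # e.g. picture-word matching
    sem_ability + 0.3 * rng.normal(size=n_patients),   # e.g. synonym judgement
])

# PCA: centre the scores, then take the SVD; rows of Vt are the components
centred = battery - battery.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
loadings = Vt[:2]                       # first two components
variance = S**2 / np.sum(S**2)          # proportion of variance per component
scores = centred @ loadings.T           # per-patient factor scores
```

In data with this structure the first two components absorb most of the variance, and the per-patient factor scores are the quantities one would then correlate with lesion location, as in the voxel-based analysis the abstract describes.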
Neurobiological evidence for shared infrastructure in comprehension and production
MPI Nijmegen/Donders Institute
If production and comprehension share their neuronal substrate, then processing in one modality should lead to adaptation effects in the other modality. I will present functional magnetic resonance imaging data that speak to this issue. Participants either overtly produced or heard descriptions of pictures. We looked for brain regions showing adaptation effects to the repetition of lexical, semantic and syntactic structures. In order to ensure that not just the same brain regions but also the same neuronal populations within these regions are involved in syntactic processing in speaking and listening, we compared syntactic adaptation effects within processing modalities (syntactic production-to-production and comprehension-to-comprehension priming) with syntactic adaptation effects between processing modalities (syntactic comprehension-to-production and production-to-comprehension priming). We found adaptation effects in areas known to be involved in language processing. For instance, we found that repetition of syntactic structure facilitates syntactic processing in the brain within and across processing modalities to the same extent. From these results it follows that to a large extent (i.e. with the exception of early auditory processes and articulatory operations) the same neurobiological infrastructure seems to subserve speaking and listening.
- Where and when:
Mar 26-28, 2014, Museum het Valkhof, Kelfkensbos 59, Nijmegen
- Important deadlines:
- Register by e-mail before: March 1, 2014
- Falk Huettig