Mooijman, S., Schoonen, R., Goral, M., Roelofs, A., & Ruiter, M. B. (2025). Why do bilingual speakers with aphasia alternate between languages? A study into their experiences and mixing patterns. Aphasiology. Advance online publication. doi:10.1080/02687038.2025.2452928.
Abstract
Background
The factors that contribute to language alternation by bilingual speakers with aphasia have been debated. Some studies suggest that atypical language mixing results from impairments in language control, while others posit that mixing is a way to enhance communicative effectiveness. To address this question, most prior research examined the appropriateness of language mixing in connected speech tasks.
Aims
The goal of this study was to provide new insight into the question of whether language mixing in aphasia reflects a strategy to enhance verbal effectiveness or involuntary behaviour resulting from impaired language control.
Methods & procedures
Semi-structured web-based interviews with bilingual speakers with aphasia (N = 19) with varying language backgrounds were conducted. The interviews were transcribed and coded for: (1) Self-reports regarding language control and compensation, (2) instances of language mixing, and (3) in two cases, instances of repair initiation.
Outcomes & results
The results showed that several participants reported language control difficulties but that the knowledge of additional languages could also be recruited to compensate for lexical retrieval problems. Most participants showed no or very few instances of mixing and the observed mixes appeared to adhere to the pragmatic context and known functions of switching. Three participants exhibited more marked switching behaviour and reported corresponding difficulties with language control. Instances of atypical mixing did not coincide with clear problems initiating conversational repair.
Conclusions
Our study highlights the variability in language mixing patterns of bilingual speakers with aphasia. Furthermore, most of the individuals in the study appeared to be able to effectively control their languages, and to alternate between their languages for compensatory purposes. Control deficits resulting in atypical language mixing were observed in a small number of participants.
Mooijman, S., Schoonen, R., Roelofs, A., & Ruiter, M. B. (2022). Executive control in bilingual aphasia: A systematic review. Bilingualism: Language and Cognition, 25(1), 13-28. doi:10.1017/S136672892100047X.
Abstract
Much research has been dedicated to the effects of bilingualism on executive control (EC). For bilinguals with aphasia, the interplay with EC is complex. In this systematic review, we synthesize research on this topic and provide an overview of the current state of the field. First, we examine the evidence for EC deficits in bilingual persons with aphasia (bPWA). We then discuss the domain generality of bilingual language control impairments. Finally, we evaluate the bilingual advantage hypothesis in bPWA. We conclude that (1) EC impairments in bPWA are frequently observed, (2) experimental results on the relationship between linguistic and domain-general control are mixed, (3) bPWA with language control problems in everyday communication have domain-general EC problems, and (4) there are indications for EC advantages in bPWA. We end with directions for experimental work that could provide better insight into the intricate relationship between EC and bilingual aphasia.
Janssen, D. P., Roelofs, A., & Levelt, W. J. M. (2004). Stem complexity and inflectional encoding in language production. Journal of Psycholinguistic Research, 33(5), 365-381. doi:10.1023/B:JOPR.0000039546.60121.a8.
Abstract
Three experiments are reported that examined whether stem complexity plays a role in inflecting polymorphemic words in language production. Experiment 1 showed that preparation effects for words with polymorphemic stems are larger when they are produced among words with constant inflectional structures compared to words with variable inflectional structures and simple stems. This replicates earlier findings for words with monomorphemic stems (Janssen et al., 2002). Experiments 2 and 3 showed that when inflectional structure is held constant, the preparation effects are equally large with simple and compound stems, and with compound and complex adjectival stems. These results indicate that inflectional encoding is blind to the complexity of the stem, which suggests that specific inflectional rather than generic morphological frames guide the generation of inflected forms in speaking words.
Levelt, W. J. M., Meyer, A. S., & Roelofs, A. (2004). Relations of lexical access to neural implementation and syntactic encoding [author's response]. Behavioral and Brain Sciences, 27, 299-301. doi:10.1017/S0140525X04270078.
Abstract
How can one conceive of the neuronal implementation of the processing model we proposed in our target article? In his commentary (Pulvermüller 1999, reprinted here in this issue), Pulvermüller makes various proposals concerning the underlying neural mechanisms and their potential localizations in the brain. These proposals demonstrate the compatibility of our processing model and current neuroscience. We add further evidence on details of localization based on a recent meta-analysis of neuroimaging studies of word production (Indefrey & Levelt 2000). We also express some minor disagreements with respect to Pulvermüller’s interpretation of the “lemma” notion, and concerning his neural modeling of phonological code retrieval. Branigan & Pickering discuss important aspects of syntactic encoding, which was not the topic of the target article. We discuss their well-taken proposal that multiple syntactic frames for a single verb lemma are represented as independent nodes, which can be shared with other verbs, thus accounting for syntactic priming in speech production. We also discuss how, in principle, the alternative multiple-frame-multiple-lemma account can be tested empirically. The available evidence does not seem to support that account.
Meeuwissen, M., Roelofs, A., & Levelt, W. J. M. (2004). Naming analog clocks conceptually facilitates naming digital clocks. Brain and Language, 90(1-3), 434-440. doi:10.1016/S0093-934X(03)00454-1.
Abstract
This study investigates how speakers of Dutch compute and produce relative time expressions. Naming digital clocks (e.g., 2:45, say “quarter to three”) requires conceptual operations on the minute and hour information for the correct relative time expression. The interplay of these conceptual operations was investigated using a repetition priming paradigm. Participants named analog clocks (the primes) directly before naming digital clocks (the targets). The targets referred to the hour (e.g., 2:00), half past the hour (e.g., 2:30), or the coming hour (e.g., 2:45). The primes differed from the target in one or two hours and in five or ten minutes. Digital clock naming latencies were shorter with a five- than with a ten-minute difference between prime and target, but the difference in hours had no effect. Moreover, the distance in minutes had an effect only for half past the hour and the coming hour, but not for the hour. These findings suggest that conceptual facilitation occurs when conceptual transformations are shared between prime and target in telling time.
Roelofs, A. (2004). Seriality of phonological encoding in naming objects and reading their names. Memory & Cognition, 32(2), 212-222.
Abstract
There is a remarkable lack of research bringing together the literatures on oral reading and speaking. As concerns phonological encoding, both models of reading and speaking assume a process of segmental spellout for words, which is followed by serial prosodification in models of speaking (e.g., Levelt, Roelofs, & Meyer, 1999). Thus, a natural place to merge models of reading and speaking would be at the level of segmental spellout. This view predicts similar seriality effects in reading and object naming. Experiment 1 showed that the seriality of encoding inside a syllable revealed in previous studies of speaking is observed for both naming objects and reading their names. Experiment 2 showed that both object naming and reading exhibit the seriality of the encoding of successive syllables previously observed for speaking. Experiment 3 showed that the seriality is also observed when object naming and reading trials are mixed rather than tested separately, as in the first two experiments. These results suggest that a serial phonological encoding mechanism is shared between naming objects and reading their names.
Roelofs, A. (2004). The seduced speaker: Modeling of cognitive control. In A. Belz, R. Evans, & P. Piwek (Eds.), Natural language generation (pp. 1-10). Berlin: Springer.
Abstract
Although humans are the ultimate “natural language generators”, the area of psycholinguistic modeling has been somewhat underrepresented in recent approaches to Natural Language Generation in computer science. To draw attention to the area and illustrate its potential relevance to Natural Language Generation, I provide an overview of recent work on psycholinguistic modeling of language production together with some key empirical findings, state-of-the-art experimental techniques, and their historical roots. The techniques include analyses of speech-error corpora, chronometric analyses, eyetracking, and neuroimaging.
The overview is built around the issue of cognitive control in natural language generation, concentrating on the production of single words, which is an essential ingredient of the generation of larger utterances. Most of the work exploited the fact that human speakers are good but not perfect at resisting temptation, which has provided some critical clues about the nature of the underlying system.
Roelofs, A. (2004). Error biases in spoken word planning and monitoring by aphasic and nonaphasic speakers: Comment on Rapp and Goldrick (2000). Psychological Review, 111(2), 561-572. doi:10.1037/0033-295X.111.2.561.
Abstract
B. Rapp and M. Goldrick (2000) claimed that the lexical and mixed error biases in picture naming by aphasic and nonaphasic speakers argue against models that assume a feedforward-only relationship between lexical items and their sounds in spoken word production. The author contests this claim by showing that a feedforward-only model like WEAVER++ (W. J. M. Levelt, A. Roelofs, & A. S. Meyer, 1999b) exhibits the error biases in word planning and self-monitoring. Furthermore, it is argued that extant feedback accounts of the error biases and relevant chronometric effects are incompatible. WEAVER++ simulations with self-monitoring revealed that this model accounts for the chronometric data, the error biases, and the influence of the impairment locus in aphasic speakers.
Roelofs, A. (2004). Comprehension-based versus production-internal feedback in planning spoken words: A rejoinder to Rapp and Goldrick (2004). Psychological Review, 111(2), 579-580. doi:10.1037/0033-295X.111.2.579.
Abstract
WEAVER++ has no backward links in its form-production network and yet is able to explain the lexical and mixed error biases and the mixed distractor latency effect. This refutes the claim of B. Rapp and M. Goldrick (2000) that these findings specifically support production-internal feedback. Whether their restricted interaction account model can also provide a unified account of the error biases and latency effect remains to be shown.
Roelofs, A., & Schiller, N. (2004). Produzieren von Ein- und Mehrwortäusserungen [Producing single- and multi-word utterances]. In G. Plehn (Ed.), Jahrbuch der Max-Planck Gesellschaft (pp. 655-658). Göttingen: Vandenhoeck & Ruprecht.