  • Bosker, H. R., Quené, H., Sanders, T. J. M., & de Jong, N. H. (2014). Native 'um's elicit prediction of low-frequency referents, but non-native 'um's do not. Journal of Memory and Language, 75, 104-116. doi:10.1016/j.jml.2014.05.004.

    Abstract

    Speech comprehension involves extensive use of prediction. Linguistic prediction may be guided by the semantics or syntax, but also by the performance characteristics of the speech signal, such as disfluency. Previous studies have shown that listeners, when presented with the filler uh, exhibit a disfluency bias for discourse-new or unknown referents, drawing inferences about the source of the disfluency. The goal of the present study is to investigate the contrast between native and non-native disfluencies in speech comprehension. Experiment 1 presented listeners with pictures of high-frequency (e.g., a hand) and low-frequency objects (e.g., a sewing machine) and with fluent and disfluent instructions. Listeners were found to anticipate reference to low-frequency objects when encountering disfluency, thus attributing disfluency to speaker trouble in lexical retrieval. Experiment 2 showed that, when participants listened to disfluent non-native speech, no anticipation of low-frequency referents was observed. We conclude that listeners can adapt their predictive strategies to the (non-native) speaker at hand, extending our understanding of the role of speaker identity in speech comprehension.
  • Bosker, H. R., Quené, H., Sanders, T. J. M., & de Jong, N. H. (2014). The perception of fluency in native and non-native speech. Language Learning, 64, 579-614. doi:10.1111/lang.12067.

    Abstract

    Where native speakers supposedly are fluent by default, non-native speakers often have to strive hard to achieve a native-like fluency level. However, disfluencies (such as pauses, fillers, repairs, etc.) occur in both native and non-native speech and it is as yet unclear how fluency raters weigh the fluency characteristics of native and non-native speech. Two rating experiments compared the way raters assess the fluency of native and non-native speech. The fluency characteristics of native and non-native speech were controlled by using phonetic manipulations in pause (Experiment 1) and speed characteristics (Experiment 2). The results show that the ratings on manipulated native and non-native speech were affected in a similar fashion. This suggests that there is no difference in the way listeners weigh the fluency characteristics of native and non-native speakers.
  • Bosker, H. R. (2014). The processing and evaluation of fluency in native and non-native speech. PhD Thesis, Utrecht University, Utrecht.

    Abstract

    Disfluency is a common characteristic of spontaneously produced speech. Disfluencies (e.g., silent pauses, filled pauses [uh’s and uhm’s], corrections, repetitions, etc.) occur in both native and non-native speech. There is an apparent contradiction between claims from the evaluative and the cognitive approach to fluency. On the one hand, the evaluative approach shows that non-native disfluencies have a negative effect on listeners’ subjective fluency impressions. On the other hand, the cognitive approach reports beneficial effects of native disfluencies on cognitive processes involved in speech comprehension, such as prediction and attention.

    This dissertation aims to resolve this apparent contradiction by combining the evaluative and cognitive approach. The reported studies target both the evaluation (Chapters 2 and 3) and the processing of fluency (Chapters 4 and 5) in native and non-native speech. Thus, it provides an integrative account of native and non-native fluency perception, informative to both language testing practice and cognitive psycholinguists. The proposed account of fluency perception testifies to the notion that speech performance matters: communication through spoken language does not only depend on what is said, but also on how it is said and by whom.
  • Bosker, H. R. (2014). The processing and evaluation of fluency in native and non-native speech. Research Note for Pearson Language Testing.
  • Chen, A. (2014). Production-comprehension (A)Symmetry: Individual differences in the acquisition of prosodic focus-marking. In N. Campbell, D. Gibbon, & D. Hirst (Eds.), Proceedings of Speech Prosody 2014 (pp. 423-427).

    Abstract

    Previous work based on different groups of children has shown that four- to five-year-old children are similar to adults in both producing and comprehending the focus-to-accentuation mapping in Dutch, contra the alleged production-precedes-comprehension asymmetry in earlier studies. In the current study, we addressed the question of whether there are individual differences in the production-comprehension (a)symmetricity. To this end, we examined the use of prosody in focus marking in production and the processing of focus-related prosody in online language comprehension in the same group of 4- to 5-year-olds. We have found that the relationship between comprehension and production can be rather diverse at an individual level. This result suggests some degree of independence in learning to use prosody to mark focus in production and learning to process focus-related prosodic information in online language comprehension, and implies influences of other linguistic and non-linguistic factors on the production-comprehension (a)symmetricity.
  • Chen, A., Kager, R., & Wong, P. (2014). Rises and falls in Dutch and Mandarin Chinese. In C. Gussenhoven, Y. Chen, & D. Dediu (Eds.), Proceedings of the 4th International Symposium on Tonal Aspects of Language (pp. 83-86).

    Abstract

    Despite the different functions of pitch in tone and non-tone languages, rises and falls are common pitch patterns across different languages. In the current study, we ask what the language-specific phonetic realizations of rises and falls are. Chinese and Dutch speakers participated in a production experiment. We used contexts composed for conveying specific communicative purposes to elicit rises and falls. We measured both tonal alignment and tonal scaling for both patterns. For the alignment measurements, we found language-specific patterns for the rises, but not for the falls. For rises, both peak and valley were aligned later among Chinese speakers compared to Dutch speakers. For all the scaling measurements (maximum pitch, minimum pitch, and pitch range), no language-specific patterns were found for either the rises or the falls.
  • Chu, M., Meyer, A. S., Foulkes, L., & Kita, S. (2014). Individual differences in frequency and saliency of speech-accompanying gestures: The role of cognitive abilities and empathy. Journal of Experimental Psychology: General, 143, 694-709. doi:10.1037/a0033861.

    Abstract

    The present study concerns individual differences in gesture production. We used correlational and multiple regression analyses to examine the relationship between individuals’ cognitive abilities and empathy levels and their gesture frequency and saliency. We chose predictor variables according to experimental evidence of the functions of gesture in speech production and communication. We examined 3 types of gestures: representational gestures, conduit gestures, and palm-revealing gestures. Higher frequency of representational gestures was related to poorer visual and spatial working memory, spatial transformation ability, and conceptualization ability; higher frequency of conduit gestures was related to poorer visual working memory, conceptualization ability, and higher levels of empathy; and higher frequency of palm-revealing gestures was related to higher levels of empathy. The saliency of all gestures was positively related to level of empathy. These results demonstrate that cognitive abilities and empathy levels are related to individual differences in gesture frequency and saliency.
  • Evans, S., McGettigan, C., Agnew, Z., Rosen, S., Cesar, L., Boebinger, D., Ostarek, M., Chen, S. H., Richards, A., Meekins, S., & Scott, S. K. (2014). The neural basis of informational and energetic masking effects in the perception and production of speech [abstract]. The Journal of the Acoustical Society of America, 136(4), 2243. doi:10.1121/1.4900096.

    Abstract

    When we have spoken conversations, it is usually in the context of competing sounds within our environment. Speech can be masked by many different kinds of sounds, for example, machinery noise and the speech of others, and these different sounds place differing demands on cognitive resources. In this talk, I will present data from a series of functional magnetic resonance imaging (fMRI) studies in which the informational properties of background sounds have been manipulated to make them more or less similar to speech. I will demonstrate the neural effects associated with speaking over and listening to these sounds, and demonstrate how in perception these effects are modulated by the age of the listener. The results will be interpreted within a framework of auditory processing developed from primate neurophysiology and human functional imaging work (Rauschecker and Scott 2009).
  • Ganushchak, L., Konopka, A. E., & Chen, Y. (2014). What the eyes say about planning of focused referents during sentence formulation: a cross-linguistic investigation. Frontiers in Psychology, 5: 1124. doi:10.3389/fpsyg.2014.01124.

    Abstract

    This study investigated how sentence formulation is influenced by a preceding discourse context. In two eye-tracking experiments, participants described pictures of two-character transitive events in Dutch (Experiment 1) and Chinese (Experiment 2). Focus was manipulated by presenting questions before each picture. In the Neutral condition, participants first heard ‘What is happening here?’ In the Object or Subject Focus conditions, the questions asked about the Object or Subject character (What is the policeman stopping? Who is stopping the truck?). The target response was the same in all conditions (The policeman is stopping the truck). In both experiments, sentence formulation in the Neutral condition showed the expected pattern of speakers fixating the subject character (policeman) before the object character (truck). In contrast, in the focus conditions speakers rapidly directed their gaze preferentially only to the character they needed to encode to answer the question (the new, or focused, character). The timing of gaze shifts to the new character varied by language group (Dutch vs. Chinese): shifts to the new character occurred earlier when information in the question could be repeated in the response with the same syntactic structure (in Chinese but not in Dutch). The results show that discourse affects the timecourse of linguistic formulation in simple sentences and that these effects can be modulated by language-specific linguistic structures such as parallels in the syntax of questions and declarative sentences.
  • Ganushchak, L. Y., & Acheson, D. J. (Eds.). (2014). What's to be learned from speaking aloud? - Advances in the neurophysiological measurement of overt language production. [Research topic] [Special Issue]. Frontiers in Language Sciences. Retrieved from http://www.frontiersin.org/Language_Sciences/researchtopics/What_s_to_be_Learned_from_Spea/1671.

    Abstract

    Researchers have long avoided neurophysiological experiments of overt speech production due to the suspicion that artifacts caused by muscle activity may lead to a bad signal-to-noise ratio in the measurements. However, the need to actually produce speech may influence earlier processing and qualitatively change speech production processes and what we can infer from neurophysiological measures thereof. Recently, however, overt speech has been successfully investigated using EEG, MEG, and fMRI. The aim of this Research Topic is to draw together recent research on the neurophysiological basis of language production, with the aim of developing and extending theoretical accounts of the language production process. In this Research Topic of Frontiers in Language Sciences, we invite both experimental and review papers, as well as those about the latest methods in acquisition and analysis of overt language production data. All aspects of language production are welcome: i.e., from conceptualization to articulation during native as well as multilingual language production. Focus should be placed on using the neurophysiological data to inform questions about the processing stages of language production. In addition, emphasis should be placed on the extent to which the identified components of the electrophysiological signal (e.g., ERP/ERF, neuronal oscillations, etc.), brain areas or networks are related to language comprehension and other cognitive domains. By bringing together electrophysiological and neuroimaging evidence on language production mechanisms, a more complete picture of the locus of language production processes and their temporal and neurophysiological signatures will emerge.
  • Guerra, E., Huettig, F., & Knoeferle, P. (2014). Assessing the time course of the influence of featural, distributional and spatial representations during reading. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 2309-2314). Austin, TX: Cognitive Science Society. Retrieved from https://mindmodeling.org/cogsci2014/papers/402/.

    Abstract

    What does semantic similarity between two concepts mean? How could we measure it? The way in which semantic similarity is calculated might differ depending on the theoretical notion of semantic representation. In an eye-tracking reading experiment, we investigated whether two widely used semantic similarity measures (based on featural or distributional representations) have distinctive effects on sentence reading times. In other words, we explored whether these measures of semantic similarity differ qualitatively. In addition, we examined whether visually perceived spatial distance interacts with either or both of these measures. Our results showed that the effect of featural and distributional representations on reading times can differ both in direction and in its time course. Moreover, both featural and distributional information interacted with spatial distance, yet in different sentence regions and reading measures. We conclude that featural and distributional representations are distinct components of semantic representation.
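    As an aside for readers unfamiliar with these measures: featural similarity is typically operationalized as the overlap between hand-coded semantic feature sets, and distributional similarity as the cosine between co-occurrence vectors derived from corpora. The following is a minimal illustrative sketch with invented toy values, not the materials or computations used in the paper:

      import numpy as np

      def distributional_similarity(u, v):
          # Cosine of the angle between two co-occurrence (distributional) vectors.
          return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

      def featural_similarity(a, b):
          # Proportion of shared semantic features (Jaccard overlap) between feature sets.
          a, b = set(a), set(b)
          return len(a & b) / len(a | b)

      # Toy data: a small co-occurrence vector and a hand-coded feature list per concept.
      print(distributional_similarity(np.array([4.0, 0.0, 2.0, 7.0]),
                                      np.array([3.0, 1.0, 2.0, 5.0])))
      print(featural_similarity({"animal", "has_fur", "barks"},
                                {"animal", "has_fur", "howls"}))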
  • Guerra, E., & Knoeferle, P. (2014). Spatial distance modulates reading times for sentences about social relations: evidence from eye tracking. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 2315-2320). Austin, TX: Cognitive Science Society. Retrieved from https://mindmodeling.org/cogsci2014/papers/403/.

    Abstract

    Recent evidence from eye tracking during reading showed that non-referential spatial distance presented in a visual context can modulate semantic interpretation of similarity relations rapidly and incrementally. In two eye-tracking reading experiments we extended these findings in two important ways; first, we examined whether other semantic domains (social relations) could also be rapidly influenced by spatial distance during sentence comprehension. Second, we aimed to further specify how abstract language is co-indexed with spatial information by varying the syntactic structure of sentences between experiments. Spatial distance rapidly modulated reading times as a function of the social relation expressed by a sentence. Moreover, our findings suggest that abstract language can be co-indexed as soon as critical information becomes available for the reader.
  • Huettig, F. (2014). Role of prediction in language learning. In P. J. Brooks, & V. Kempe (Eds.), Encyclopedia of language development (pp. 479-481). London: Sage Publications.
  • Huettig, F., & Mishra, R. K. (2014). How literacy acquisition affects the illiterate mind - A critical examination of theories and evidence. Language and Linguistics Compass, 8(10), 401-427. doi:10.1111/lnc3.12092.

    Abstract

    At present, more than one-fifth of humanity is unable to read and write. We critically examine experimental evidence and theories of how (il)literacy affects the human mind. In our discussion we show that literacy has significant cognitive consequences that go beyond the processing of written words and sentences. Thus, cultural inventions such as reading shape general cognitive processing in non-trivial ways. We suggest that this has important implications for educational policy and guidance as well as research into cognitive processing and brain functioning.
  • Janse, E., & Jesse, A. (2014). Working memory affects older adults’ use of context in spoken-word recognition. Quarterly Journal of Experimental Psychology, 67, 1842-1862. doi:10.1080/17470218.2013.879391.

    Abstract

    Many older listeners report difficulties in understanding speech in noisy situations. Working memory and other cognitive skills may modulate, however, older listeners’ ability to use context information to alleviate the effects of noise on spoken-word recognition. In the present study, we investigated whether working memory predicts older adults’ ability to immediately use context information in the recognition of words embedded in sentences, presented in different listening conditions. In a phoneme-monitoring task, older adults were asked to detect as fast and as accurately as possible target phonemes in sentences spoken by a target speaker. Target speech was presented without noise, with fluctuating speech-shaped noise, or with competing speech from a single distractor speaker. The gradient measure of contextual probability (derived from a separate offline rating study) mainly affected the speed of recognition, with only a marginal effect on detection accuracy. Contextual facilitation was modulated by older listeners’ working memory and age across listening conditions. Working memory and age, as well as hearing loss, were also the most consistent predictors of overall listening performance. Older listeners’ immediate benefit from context in spoken-word recognition thus relates to their ability to keep and update a semantic representation of the sentence content in working memory.

  • Konopka, A. E., & Brown-Schmidt, S. (2014). Message encoding. In V. Ferreira, M. Goldrick, & M. Miozzo (Eds.), The Oxford handbook of language production (pp. 3-20). New York: Oxford University Press.
  • Konopka, A. E., & Meyer, A. S. (2014). Priming sentence planning. Cognitive Psychology, 73, 1-40. doi:10.1016/j.cogpsych.2014.04.001.

    Abstract

    Sentence production requires mapping preverbal messages onto linguistic structures. Because sentences are normally built incrementally, the information encoded in a sentence-initial increment is critical for explaining how the mapping process starts and for predicting its timecourse. Two experiments tested whether and when speakers prioritize encoding of different types of information at the outset of formulation by comparing production of descriptions of transitive events (e.g., A dog is chasing the mailman) that differed on two dimensions: the ease of naming individual characters and the ease of apprehending the event gist (i.e., encoding the relational structure of the event). To additionally manipulate ease of encoding, speakers described the target events after receiving lexical primes (facilitating naming; Experiment 1) or structural primes (facilitating generation of a linguistic structure; Experiment 2). Both properties of the pictured events and both types of primes influenced the form of target descriptions and the timecourse of formulation: character-specific variables increased the probability of speakers encoding one character with priority at the outset of formulation, while the ease of encoding event gist and of generating a syntactic structure increased the likelihood of early encoding of information about both characters. The results show that formulation is flexible and highlight some of the conditions under which speakers might employ different planning strategies.
  • Lev-Ari, S., & Peperkamp, S. (2014). Do people converge to the linguistic patterns of non-reliable speakers? Perceptual learning from non-native speakers. In S. Fuchs, M. Grice, A. Hermes, L. Lancia, & D. Mücke (Eds.), Proceedings of the 10th International Seminar on Speech Production (ISSP) (pp. 261-264).

    Abstract

    People's language is shaped by the input from the environment. The environment, however, offers a range of linguistic inputs that differ in their reliability. We test whether listeners weigh input differently depending on the reliability of its source. Using a perceptual learning paradigm, we show that listeners adjust their representations according to linguistic input provided by native but not by non-native speakers. This is despite the fact that listeners are able to learn the characteristics of the speech of both speakers. These results provide evidence for a dissociation between adaptation to the characteristics of specific speakers and adjustment of linguistic representations in general based on these learned characteristics. This study also has implications for theories of language change. In particular, it casts doubt on the hypothesis that a large proportion of non-native speakers in a community can bring about linguistic changes.
  • Lev-Ari, S., & Peperkamp, S. (2014). An experimental study of the role of social factors in sound change. Laboratory Phonology, 5(3), 379-401. doi:10.1515/lp-2014-0013.

    Abstract

    There is great variation in whether foreign sounds in loanwords are adapted or retained. Importantly, the retention of foreign sounds can lead to a sound change in the language. We propose that social factors influence the likelihood of loanword sound adaptation, and use this case to introduce a novel experimental paradigm for studying language change that captures the role of social factors. Specifically, we show that the relative prestige of the donor language in the loanword's semantic domain influences the rate of sound adaptation. We further show that speakers adapt to the performance of their ‘community’, and that this adaptation leads to the creation of a norm. The results of this study are thus the first to show an effect of social factors on loanword sound adaptation in an experimental setting. Moreover, they open up a new domain of experimentally studying language change in a manner that integrates social factors.
  • Lev-Ari, S., San Giacomo, M., & Peperkamp, S. (2014). The effect of domain prestige and interlocutors’ bilingualism on sound adaptation. Journal of Sociolinguistics, 18(5), 658-684. doi:10.1111/josl.12102.

    Abstract

    There is great variability in whether foreign sounds in loanwords are adapted, such that segments show cross-word and cross-situational variation in adaptation. Previous research proposed that word frequency, speakers' level of bilingualism and neighborhoods' level of bilingualism can explain such variability. We test for the effect of these factors and propose two additional factors: interlocutors' level of bilingualism and the prestige of the donor language in the loanword's domain. Analyzing elicited productions of loanwords from Spanish into Mexicano in a village where Spanish and Mexicano enjoy prestige in complementary domains, we show that interlocutors' bilingualism and prestige influence the rate of sound adaptation. Additionally, we find that speakers accommodate to their interlocutors, regardless of the interlocutors' level of bilingualism. As retention of foreign sounds can lead to sound change, these results show that social factors can influence changes in a language's sound system.
  • Lev-Ari, S., & Peperkamp, S. (2014). The influence of inhibitory skill on phonological representations in production and perception. Journal of Phonetics, 47, 36-46. doi:10.1016/j.wocn.2014.09.001.

    Abstract

    Inhibition is known to play a role in speech perception and has been hypothesized to likewise influence speech production. In this paper we test whether individual differences in inhibitory skill can lead to individual differences in phonological representations in perception and production. We further examine whether the type of inhibition that influences phonological representation is domain-specific or domain-general. Native French speakers read aloud sentences with words containing a voiced stop that either have a voicing neighbor (target) or not (control). The duration of pre-voicing was measured. Participants similarly performed a lexical decision task on versions of these target and matched control words whose pre-voicing duration was manipulated. Lastly, participants performed linguistic and non-linguistic inhibition tasks. Results indicate that the lower speakers' linguistic or non-linguistic inhibition is, the easier it is for them to recognize words with a voiceless neighbor when these words have a shorter, intermediate, pre-voicing rather than a longer one. Inhibitory skill did not predict recognition time for control words, indicating that the effect was due to the greater activation of the voiceless neighbor. Inhibition did not predict pre-voicing duration in production. These results indicate that individual differences in cognitive skills can influence phonological representations in speech perception.
  • Liu, Z., Chen, A., & Van de Velde, H. (2014). Prosodic focus marking in Bai. In N. Campbell, D. Gibbon, & D. Hirst (Eds.), Proceedings of Speech Prosody 2014 (pp. 628-631).

    Abstract

    This study investigates prosodic marking of focus in Bai, a Sino-Tibetan language spoken in the Southwest of China, by adopting a semi-spontaneous experimental approach. Our data show that Bai speakers increase the duration of the focused constituent and reduce the duration of the post-focus constituent to encode focus. However, duration is not used in Bai to distinguish focus types differing in size and contrastivity. Further, pitch plays no role in signaling focus and differentiating focus types. The results thus suggest that Bai uses prosody to mark focus, but to a lesser extent than Mandarin Chinese, with which Bai has been in close contact for centuries, or Cantonese, whose tonal system Bai resembles, although Bai is similar to Cantonese in its reliance on duration in prosodic focus marking.
  • Mani, N., & Huettig, F. (2014). Word reading skill predicts anticipation of upcoming spoken language input: A study of children developing proficiency in reading. Journal of Experimental Child Psychology, 126, 264-279. doi:10.1016/j.jecp.2014.05.004.

    Abstract

    Despite the efficiency with which language users typically process spoken language, a growing body of research finds substantial individual differences in both the speed and accuracy of spoken language processing potentially attributable to participants’ literacy skills. Against this background, the current study takes a look at the role of word reading skill in listeners’ anticipation of upcoming spoken language input in children at the cusp of learning to read: if reading skills impact predictive language processing, then children at this stage of literacy acquisition should be most susceptible to the effects of reading skills on spoken language processing. We tested 8-year-old children on their prediction of upcoming spoken language input in an eye-tracking task. While children, as in previous studies to date, were able to anticipate upcoming spoken language input, there was a strong positive correlation between children’s word reading (but not their pseudo-word reading and meta-phonological awareness or their spoken word recognition) skills and their prediction skills. We suggest that these findings are most compatible with the notion that the process of learning orthographic representations during reading acquisition sharpens pre-existing lexical representations which in turn also supports anticipation of upcoming spoken words.
  • McQueen, J. M., & Huettig, F. (2014). Interference of spoken word recognition through phonological priming from visual objects and printed words. Attention, Perception & Psychophysics, 76, 190-200. doi:10.3758/s13414-013-0560-8.

    Abstract

    Three cross-modal priming experiments examined the influence of pre-exposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes which were pictures (Experiments 1 and 3) or those pictures’ printed names (Experiment 2). Prime-target pairs were phonologically onset-related (e.g., pijl-pijn, arrow-pain), were from the same semantic category (e.g., pijl-zwaard, arrow-sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words, and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures, and even though strategic naming would interfere with lexical decision-making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize related spoken words.
  • Olivers, C. N. L., Huettig, F., Singh, J. P., & Mishra, R. K. (2014). The influence of literacy on visual search. Visual Cognition, 21, 74-101. doi:10.1080/13506285.2013.875498.

    Abstract

    Currently one in five adults is still unable to read despite a rapidly developing world. Here we show that (il)literacy has important consequences for the cognitive ability of selecting relevant information from a visual display of non-linguistic material. In two experiments we compared low to high literacy observers on both an easy and a more difficult visual search task involving different types of chicken. Low literates were consistently slower (as indicated by overall RTs) in both experiments. More detailed analyses, including eye movement measures, suggest that the slowing is partly due to display wide (i.e. parallel) sensory processing but mainly due to post-selection processes, as low literates needed more time between fixating the target and generating a manual response. Furthermore, high and low literacy groups differed in the way search performance was distributed across the visual field. High literates performed relatively better when the target was presented in central regions, especially on the right. At the same time, high literacy was also associated with a more general bias towards the top and the left, especially in the more difficult search. We conclude that learning to read results in an extension of the functional visual field from the fovea to parafoveal areas, combined with some asymmetry in scan pattern influenced by the reading direction, both of which also influence other (e.g. non-linguistic) tasks such as visual search.

  • Pinget, A.-F., Bosker, H. R., Quené, H., & de Jong, N. H. (2014). Native speakers' perceptions of fluency and accent in L2 speech. Language Testing, 31, 349-365. doi:10.1177/0265532214526177.

    Abstract

    Oral fluency and foreign accent distinguish L2 from L1 speech production. In language testing practices, both fluency and accent are usually assessed by raters. This study investigates what exactly native raters of fluency and accent take into account when judging L2. Our aim is to explore the relationship between objectively measured temporal, segmental and suprasegmental properties of speech on the one hand, and fluency and accent as rated by native raters on the other hand. For 90 speech fragments from Turkish and English L2 learners of Dutch, several acoustic measures of fluency and accent were calculated. In Experiment 1, 20 native speakers of Dutch rated the L2 Dutch samples on fluency. In Experiment 2, 20 different untrained native speakers of Dutch judged the L2 Dutch samples on accentedness. Regression analyses revealed that acoustic measures of fluency were good predictors of fluency ratings. Secondly, segmental and suprasegmental measures of accent could predict some variance of accent ratings. Thirdly, perceived fluency and perceived accent were only weakly related. In conclusion, this study shows that fluency and perceived foreign accent can be judged as separate constructs.
  • Poellmann, K., Bosker, H. R., McQueen, J. M., & Mitterer, H. (2014). Perceptual adaptation to segmental and syllabic reductions in continuous spoken Dutch. Journal of Phonetics, 46, 101-127. doi:10.1016/j.wocn.2014.06.004.

    Abstract

    This study investigates if and how listeners adapt to reductions in casual continuous speech. In a perceptual-learning variant of the visual-world paradigm, two groups of Dutch participants were exposed to either segmental (/b/ → [ʋ]) or syllabic (ver- → [fː]) reductions in spoken Dutch sentences. In the test phase, both groups heard both kinds of reductions, but now applied to different words. In one of two experiments, the segmental reduction exposure group was better than the syllabic reduction exposure group in recognizing new reduced /b/-words. In both experiments, the syllabic reduction group showed a greater target preference for new reduced ver-words. Learning about reductions was thus applied to previously unheard words. This lexical generalization suggests that mechanisms compensating for segmental and syllabic reductions take place at a prelexical level, and hence that lexical access involves an abstractionist mode of processing. Existing abstractionist models need to be revised, however, as they do not include representations of sequences of segments (corresponding e.g. to ver-) at the prelexical level.
  • Reifegerste, J. (2014). Morphological processing in younger and older people: Evidence for flexible dual-route access. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Roswandowitz, C., Mathias, S. R., Hintz, F., Kreitewolf, J., Schelinski, S., & von Kriegstein, K. (2014). Two cases of selective developmental voice-recognition impairments. Current Biology, 24(19), 2348-2353. doi:10.1016/j.cub.2014.08.048.

    Abstract

    Recognizing other individuals is an essential skill in humans and in other species [1, 2 and 3]. Over the last decade, it has become increasingly clear that person-identity recognition abilities are highly variable. Roughly 2% of the population has developmental prosopagnosia, a congenital deficit in recognizing others by their faces [4]. It is currently unclear whether developmental phonagnosia, a deficit in recognizing others by their voices [5], is equally prevalent, or even whether it actually exists. Here, we aimed to identify cases of developmental phonagnosia. We collected more than 1,000 data sets from self-selected German individuals by using a web-based screening test that was designed to assess their voice-recognition abilities. We then examined potentially phonagnosic individuals by using a comprehensive laboratory test battery. We found two novel cases of phonagnosia: AS, a 32-year-old female, and SP, a 32-year-old male; both are otherwise healthy academics, have normal hearing, and show no pathological abnormalities in brain structure. The two cases have comparable patterns of impairments: both performed at least 2 SDs below the level of matched controls on tests that required learning new voices, judging the familiarity of famous voices, and discriminating pitch differences between voices. In both cases, only voice-identity processing per se was affected: face recognition, speech intelligibility, emotion recognition, and musical ability were all comparable to controls. The findings confirm the existence of developmental phonagnosia as a modality-specific impairment and allow a first rough prevalence estimate.

  • Shao, Z., Roelofs, A., Acheson, D. J., & Meyer, A. S. (2014). Electrophysiological evidence that inhibition supports lexical selection in picture naming. Brain Research, 1586, 130-142. doi:10.1016/j.brainres.2014.07.009.

    Abstract

    We investigated the neural basis of inhibitory control during lexical selection. Participants overtly named pictures while response times (RTs) and event-related brain potentials (ERPs) were recorded. The difficulty of lexical selection was manipulated by using object and action pictures with high name agreement (few response candidates) versus low name agreement (many response candidates). To assess the involvement of inhibition, we conducted delta plot analyses of naming RTs and examined the N2 component of the ERP. We found longer mean naming RTs and a larger N2 amplitude in the low relative to the high name agreement condition. For action naming we found a negative correlation between the slopes of the slowest delta segment and the difference in N2 amplitude between the low and high name agreement conditions. The converging behavioral and electrophysiological evidence suggests that selective inhibition is engaged to reduce competition during lexical selection in picture naming.
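    The delta plot analysis mentioned above plots the size of a condition effect against overall response speed across RT quantiles; the slope of the slowest segment is commonly taken as an index of selective inhibition. A minimal sketch of how such delta-plot points can be computed (toy simulated RTs, not the authors' data or exact procedure):

      import numpy as np

      def delta_plot_points(rt_slow_cond, rt_fast_cond, n_bins=5):
          # For each quantile bin: x = mean of the two conditions' quantile RTs,
          # delta = their difference, i.e. the condition effect at that speed.
          qs = (np.arange(n_bins) + 0.5) / n_bins
          q_slow = np.quantile(rt_slow_cond, qs)
          q_fast = np.quantile(rt_fast_cond, qs)
          return (q_slow + q_fast) / 2, q_slow - q_fast

      # Toy RTs (ms) for one participant: low vs. high name agreement conditions.
      rng = np.random.default_rng(2)
      low_agreement = rng.normal(820, 120, 200)
      high_agreement = rng.normal(760, 100, 200)
      x, delta = delta_plot_points(low_agreement, high_agreement)
      slope_slowest_segment = (delta[-1] - delta[-2]) / (x[-1] - x[-2])
      print(x, delta, slope_slowest_segment)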
  • Shao, Z., Roelofs, A., & Meyer, A. S. (2014). Predicting naming latencies for action pictures: Dutch norms. Behavior Research Methods, 46, 274-283. doi:10.3758/s13428-013-0358-6.

    Abstract

    The present study provides Dutch norms for age of acquisition, familiarity, imageability, image agreement, visual complexity, word frequency, and word length (in syllables) for 124 line drawings of actions. Ratings were obtained from 117 Dutch participants. Word frequency was determined on the basis of the SUBTLEX-NL corpus (Keuleers, Brysbaert, & New, Behavior Research Methods, 42, 643–650, 2010). For 104 of the pictures, naming latencies and name agreement were determined in a separate naming experiment with 74 native speakers of Dutch. The Dutch norms closely corresponded to the norms for British English. Multiple regression analysis showed that age of acquisition, imageability, image agreement, visual complexity, and name agreement were significant predictors of naming latencies, whereas word frequency and word length were not. Combined with the results of a principal-component analysis, these findings suggest that variables influencing the processes of conceptual preparation and lexical selection affect latencies more strongly than do variables influencing word-form encoding.

    Additional information

    Shao_Behav_Res_2013_Suppl_Mat.doc
  • Shao, Z., Janse, E., Visser, K., & Meyer, A. S. (2014). What do verbal fluency tasks measure? Predictors of verbal fluency performance in older adults. Frontiers in Psychology, 5: 772. doi:10.3389/fpsyg.2014.00772.

    Abstract

    This study examined the contributions of verbal ability and executive control to verbal fluency performance in older adults (n=82). Verbal fluency was assessed in letter and category fluency tasks, and performance on these tasks was related to indicators of vocabulary size, lexical access speed, updating, and inhibition ability. In regression analyses the number of words produced in both fluency tasks was predicted by updating ability, and the speed of the first response was predicted by vocabulary size and, for category fluency only, lexical access speed. These results highlight the hybrid character of both fluency tasks, which may limit their usefulness for research and clinical purposes.
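    The regression analyses referred to above relate fluency scores to the cognitive predictor measures. As a rough illustration only (simulated toy scores and invented variable names, not the study's data or exact model), a multiple regression of this kind could be run as follows:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 82  # number of participants; all values below are simulated

      data = pd.DataFrame({
          "vocabulary": rng.normal(30, 5, n),      # vocabulary-size score
          "access_speed": rng.normal(600, 80, n),  # lexical access speed (ms)
          "updating": rng.normal(0.5, 0.1, n),     # updating-ability score
          "inhibition": rng.normal(0.5, 0.1, n),   # inhibition-ability score
      })
      data["category_fluency"] = (
          20 + 25 * data["updating"] - 0.01 * data["access_speed"] + rng.normal(0, 3, n)
      )

      # Multiple regression of the number of words produced on the cognitive predictors.
      fit = smf.ols("category_fluency ~ vocabulary + access_speed + updating + inhibition",
                    data=data).fit()
      print(fit.summary())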
  • Simon, E., & Sjerps, M. J. (2014). Developing non-native vowel representations: a study on child second language acquisition. COPAL: Concordia Working Papers in Applied Linguistics, 5, 693-708.

    Abstract

    This study examines what stage 9‐12‐year‐old Dutch‐speaking children have reached in the development of their L2 lexicon, focusing on its phonological specificity. Two experiments were carried out with a group of Dutch‐speaking children and adults learning English. In a first task, listeners were asked to judge Dutch words which were presented with either the target Dutch vowel or with an English vowel synthetically inserted. The second experiment was a mirror of the first, i.e. with English words and English or Dutch vowels inserted. It was examined to what extent the listeners accepted substitutions of Dutch vowels by English ones, and vice versa. The results of the experiments suggest that the children have not reached the same degree of phonological specificity of L2 words as the adults. Children not only experience a strong influence of their native vowel categories when listening to L2 words, they also apply less strict criteria.
  • Simon, E., Sjerps, M. J., & Fikkert, P. (2014). Phonological representations in children’s native and non-native lexicon. Bilingualism: Language and Cognition, 17(1), 3-21. doi:10.1017/S1366728912000764.

    Abstract

    This study investigated the phonological representations of vowels in children's native and non-native lexicons. Two experiments were mispronunciation tasks (i.e., a vowel in words was substituted by another vowel from the same language). These were carried out by Dutch-speaking 9–12-year-old children and Dutch-speaking adults, in their native (Experiment 1, Dutch) and non-native (Experiment 2, English) language. A third experiment tested vowel discrimination. In Dutch, both children and adults could accurately detect mispronunciations. In English, adults, and especially children, detected substitutions of native vowels (i.e., vowels that are present in the Dutch inventory) by non-native vowels more easily than changes in the opposite direction. Experiment 3 revealed that children could accurately discriminate most of the vowels. The results indicate that children's L1 categories strongly influenced their perception of English words. However, the data also reveal a hint of the development of L2 phoneme categories.

    Additional information

    Simon_SuppMaterial.pdf
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Examining strains and symptoms of the ‘Literacy Virus’: The effects of orthographic transparency on phonological processing in a connectionist model of reading. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Austin, TX: Cognitive Science Society.

    Abstract

    The effect of literacy on phonological processing has been described in terms of a virus that “infects all speech processing” (Frith, 1998). Empirical data has established that literacy leads to changes to the way in which phonological information is processed. Harm & Seidenberg (1999) demonstrated that a connectionist network trained to map between English orthographic and phonological representations displays more componential phonological processing than a network trained only to stably represent the phonological forms of words. Within this study we use a similar model yet manipulate the transparency of orthographic-to-phonological mappings. We observe that networks trained on a transparent orthography are better at restoring phonetic features and phonemes. However, networks trained on non-transparent orthographies are more likely to restore corrupted phonological segments with legal, coarser linguistic units (e.g. onset, coda). Our study therefore provides an explicit description of how differences in orthographic transparency can lead to varying strains and symptoms of the ‘literacy virus’.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). A comprehensive model of spoken word recognition must be multimodal: Evidence from studies of language-mediated visual attention. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014). Austin, TX: Cognitive Science Society.

    Abstract

    When processing language, the cognitive system has access to information from a range of modalities (e.g. auditory, visual) to support language processing. Language-mediated visual attention studies have shown sensitivity of the listener to phonological, visual, and semantic similarity when processing a word. In a computational model of language-mediated visual attention that models spoken word processing as the parallel integration of information from phonological, semantic and visual processing streams, we simulate such effects of competition within modalities. Our simulations raised untested predictions about stronger and earlier effects of visual and semantic similarity compared to phonological similarity around the rhyme of the word. Two visual world studies confirmed these predictions. The model and behavioral studies suggest that, during spoken word comprehension, multimodal information can be recruited rapidly to constrain lexical selection to the extent that phonological rhyme information may exert little influence on this process.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Modelling language – vision interactions in the hub and spoke framework. In J. Mayor, & P. Gomez (Eds.), Computational Models of Cognitive Processes: Proceedings of the 13th Neural Computation and Psychology Workshop (NCPW13). (pp. 3-16). Singapore: World Scientific Publishing.

    Abstract

    Multimodal integration is a central characteristic of human cognition. However our understanding of the interaction between modalities and its influence on behaviour is still in its infancy. This paper examines the value of the Hub & Spoke framework (Plaut, 2002; Rogers et al., 2004; Dilkina et al., 2008; 2010) as a tool for exploring multimodal interaction in cognition. We present a Hub and Spoke model of language–vision information interaction and report the model’s ability to replicate a range of phonological, visual and semantic similarity word-level effects reported in the Visual World Paradigm (Cooper, 1974; Tanenhaus et al, 1995). The model provides an explicit connection between the percepts of language and the distribution of eye gaze and demonstrates the scope of the Hub-and-Spoke architectural framework by modelling new aspects of multimodal cognition.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Literacy effects on language and vision: Emergent effects from an amodal shared resource (ASR) computational model. Cognitive Psychology, 75, 28-54. doi:10.1016/j.cogpsych.2014.07.002.

    Abstract

    Learning to read and write requires an individual to connect additional orthographic representations to pre-existing mappings between phonological and semantic representations of words. Past empirical results suggest that the process of learning to read and write (at least in alphabetic languages) elicits changes in the language processing system, by either increasing the cognitive efficiency of mapping between representations associated with a word, or by changing the granularity of phonological processing of spoken language, or through a combination of both. Behavioural effects of literacy have typically been assessed in offline explicit tasks that have addressed only phonological processing. However, a recent eye tracking study compared high and low literate participants on effects of phonology and semantics in processing measured implicitly using eye movements. High literates’ eye movements were more affected by phonological overlap in online speech than low literates, with only subtle differences observed in semantics. We determined whether these effects were due to cognitive efficiency and/or granularity of speech processing in a multimodal model of speech processing – the amodal shared resource model (ASR, Smith, Monaghan, & Huettig, 2013). We found that cognitive efficiency in the model had only a marginal effect on semantic processing and did not affect performance for phonological processing, whereas fine-grained versus coarse-grained phonological representations in the model simulated the high/low literacy effects on phonological processing, suggesting that literacy has a focused effect in changing the grain-size of phonological mappings.
  • Stine-Morrow, E., Payne, B., Roberts, B., Kramer, A., Morrow, D., Payne, L., Hill, P., Jackson, J., Gao, X., Noh, S., Janke, M., & Parisi, J. (2014). Training versus engagement as paths to cognitive enrichment with aging. Psychology and Aging, 29, 891-906. doi:10.1037/a0038244.

    Abstract

    While a training model of cognitive intervention targets the improvement of particular skills through instruction and practice, an engagement model is based on the idea that being embedded in an intellectually and socially complex environment can impact cognition, perhaps even broadly, without explicit instruction. We contrasted these 2 models of cognitive enrichment by randomly assigning healthy older adults to a home-based inductive reasoning training program, a team-based competitive program in creative problem solving, or a wait-list control. As predicted, those in the training condition showed selective improvement in inductive reasoning. Those in the engagement condition, on the other hand, showed selective improvement in divergent thinking, a key ability exercised in creative problem solving. On average, then, both groups appeared to show ability-specific effects. However, moderators of change differed somewhat for those in the engagement and training interventions. Generally, those who started either intervention with a more positive cognitive profile showed more cognitive growth, suggesting that cognitive resources enabled individuals to take advantage of environmental enrichment. Only in the engagement condition did initial levels of openness and social network size moderate intervention effects on cognition, suggesting that comfort with novelty and an ability to manage social resources may be additional factors contributing to the capacity to take advantage of the environmental complexity associated with engagement. Collectively, these findings suggest that training and engagement models may offer alternative routes to cognitive resilience in late life.

  • Tooley, K., Konopka, A. E., & Watson, D. (2014). Can intonational phrase structure be primed (like syntactic structure)? Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(2), 348-363. doi:10.1037/a0034900.

    Abstract

    In 3 experiments, we investigated whether intonational phrase structure can be primed. In all experiments, participants listened to sentences in which the presence and location of intonational phrase boundaries were manipulated such that the recording included either no intonational phrase boundaries, a boundary in a structurally dispreferred location, a boundary in a preferred location, or boundaries in both locations. In Experiment 1, participants repeated the sentences to test whether they would reproduce the prosodic structure they had just heard. Experiments 2 and 3 used a prime–target paradigm to evaluate whether the intonational phrase structure heard in the prime sentence might influence that of a novel target sentence. Experiment 1 showed that participants did repeat back sentences that they had just heard with the original intonational phrase structure, yet Experiments 2 and 3 found that exposure to intonational phrase boundaries on prime trials did not influence how a novel target sentence was prosodically phrased. These results suggest that speakers may retain the intonational phrasing of a sentence, but this effect is not long-lived and does not generalize across unrelated sentences. Furthermore, these findings provide no evidence that intonational phrase structure is formulated during a planning stage that is separate from other sources of linguistic information.
  • Van de Velde, M., Meyer, A. S., & Konopka, A. E. (2014). Message formulation and structural assembly: Describing "easy" and "hard" events with preferred and dispreferred syntactic structures. Journal of Memory and Language, 71(1), 124-144. doi:10.1016/j.jml.2013.11.001.

    Abstract

    When formulating simple sentences to describe pictured events, speakers look at the referents they are describing in the order of mention. Accounts of incrementality in sentence production rely heavily on analyses of this gaze-speech link. To identify systematic sources of variability in message and sentence formulation, two experiments evaluated differences in formulation for sentences describing “easy” and “hard” events (more codable and less codable events) with preferred and dispreferred structures (actives and passives). Experiment 1 employed a subliminal cuing manipulation and a cumulative priming manipulation to increase production of passive sentences. Experiment 2 examined the influence of event codability on formulation without a cuing manipulation. In both experiments, speakers showed an early preference for looking at the agent of the event when constructing active sentences. This preference was attenuated by event codability, suggesting that speakers were less likely to prioritize encoding of a single character at the outset of formulation in “easy” events than in “harder” events. Accessibility of the agent influenced formulation primarily when an event was “harder” to describe. Formulation of passive sentences in Experiment 1 also began with early fixations to the agent but changed with exposure to passive syntax: speakers were more likely to consider the patient as a suitable sentential starting point after cumulative priming. The results show that the message-to-language mapping in production can vary with the ease of encoding an event structure and of generating a suitable linguistic structure.
  • Van de Velde, M., & Meyer, A. S. (2014). Syntactic flexibility and planning scope: The effect of verb bias on advance planning during sentence recall. Frontiers in Psychology, 5: 1174. doi:10.3389/fpsyg.2014.01174.

    Abstract

    In sentence production, grammatical advance planning scope depends on contextual factors (e.g., time pressure), linguistic factors (e.g., ease of structural processing), and cognitive factors (e.g., production speed). The present study tests the influence of the availability of multiple syntactic alternatives (i.e., syntactic flexibility) on the scope of advance planning during the recall of Dutch dative phrases. We manipulated syntactic flexibility by using verbs with a strong bias or a weak bias toward one structural alternative in sentence frames accepting both verbs (e.g., strong/weak bias: De ober schotelt/serveert de klant de maaltijd [voor] “The waiter dishes out/serves the customer the meal”). To assess lexical planning scope, we varied the frequency of the first post-verbal noun (N1, Experiment 1) or the second post-verbal noun (N2, Experiment 2). In each experiment, 36 speakers produced the verb phrases in a rapid serial visual presentation (RSVP) paradigm. On each trial, they read a sentence presented one word at a time, performed a short distractor task, and then saw a sentence preamble (e.g., De ober…) which they had to complete to form the presented sentence. Onset latencies were compared using linear mixed effects models. N1 frequency did not produce any effects. N2 frequency only affected sentence onsets in the weak verb bias condition and especially in slow speakers. These findings highlight the dependency of planning scope during sentence recall on the grammatical properties of the verb and the frequency of post-verbal nouns. Implications for utterance planning in everyday speech are discussed.
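    The linear mixed effects analysis mentioned above models onset latencies with fixed effects of interest and by-speaker random effects. A minimal sketch of such a model (simulated toy data and invented column names, not the authors' dataset or model specification):

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n_speakers, n_trials = 36, 8  # toy design: 8 trials per speaker

      df = pd.DataFrame({
          "speaker": np.repeat([f"s{i}" for i in range(n_speakers)], n_trials),
          "bias": np.tile(["strong", "weak"], n_speakers * n_trials // 2),
          "freq": np.tile(["high", "high", "low", "low"], n_speakers * n_trials // 4),
      })
      df["latency"] = (650 + (df["bias"] == "weak") * 20
                       + (df["freq"] == "low") * 15 + rng.normal(0, 50, len(df)))

      # Mixed model: fixed effects of verb bias and noun frequency (plus interaction),
      # random intercepts per speaker.
      model = smf.mixedlm("latency ~ bias * freq", data=df, groups=df["speaker"])
      print(model.fit().summary())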
  • Veenstra, A., Acheson, D. J., Bock, K., & Meyer, A. S. (2014). Effects of semantic integration on subject–verb agreement: Evidence from Dutch. Language, Cognition and Neuroscience, 29(3), 355-380. doi:10.1080/01690965.2013.862284.

    Abstract

    The generation of subject–verb agreement is a central component of grammatical encoding. It is sensitive to conceptual and grammatical influences, but the interplay between these factors is still not fully understood. We investigate how semantic integration of the subject noun phrase (‘the secretary of/with the governor’) and the Local Noun Number (‘the secretary with the governor/governors’) affect the ease of selecting the verb form. Two hypotheses are assessed: according to the notional hypothesis, integration encourages the assignment of the singular notional number to the noun phrase and facilitates the choice of the singular verb form. According to the lexical interference hypothesis, integration strengthens the competition between nouns within the subject phrase, making it harder to select the verb form when the nouns mismatch in number. In two experiments, adult speakers of Dutch completed spoken preambles (Experiment 1) or selected appropriate verb forms (Experiment 2). Results showed facilitatory effects of semantic integration (fewer errors and faster responses with increasing integration). These effects did not interact with the effects of the Local Noun Number (slower response times and higher error rates for mismatching than for matching noun numbers). The findings thus support the notional hypothesis and a model of agreement where conceptual and lexical factors independently contribute to the determination of the number of the subject noun phrase and, ultimately, the verb.
  • Veenstra, A., Acheson, D. J., & Meyer, A. S. (2014). Keeping it simple: Studying grammatical encoding with lexically-reduced item sets. Frontiers in Psychology, 5: 783. doi:10.3389/fpsyg.2014.00783.

    Abstract

    Compared to the large body of work on lexical access, little research has been done on grammatical encoding in language production. An exception is the generation of subject-verb agreement. Here, two key findings have been reported: (1) Speakers make more agreement errors when the head and local noun of a phrase mismatch in number than when they match (e.g., the key to the cabinet(s)); and (2) this attraction effect is asymmetric, with stronger attraction for singular than for plural head nouns. Although these findings are robust, the cognitive processes leading to agreement errors and their significance for the generation of correct agreement are not fully understood. We propose that future studies of agreement, and grammatical encoding in general, may benefit from using paradigms that tightly control the variability of the lexical content of the material. We report two experiments illustrating this approach. In both of them, the experimental items featured combinations of four nouns, four color adjectives, and two prepositions. In Experiment 1, native speakers of Dutch described pictures in sentences such as the circle next to the stars is blue. In Experiment 2, they carried out a forced-choice task, where they read subject noun phrases (e.g., the circle next to the stars) and selected the correct verb phrase (is blue or are blue) with a button press. Both experiments showed an attraction effect, with more errors after subject phrases with mismatching than with matching head and local nouns. This effect was stronger for singular than for plural heads, replicating the attraction asymmetry. In contrast, the response times recorded in Experiment 2 showed similar attraction effects for singular and plural head nouns. These results demonstrate that critical agreement phenomena can be elicited reliably in lexically-reduced contexts. We discuss the theoretical implications of the findings and the potential and limitations of studies using lexically simple materials.
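
    As a rough illustration of how the attraction effect and its asymmetry could be tabulated from trial-level responses in a paradigm like the one above, the sketch below computes error rates per condition; the input file and column names (head_number, local_match, correct) are hypothetical and do not reflect the authors' actual data or analysis.

        import pandas as pd

        # Hypothetical trial-level data: one row per produced or chosen verb
        # phrase, with head noun number (singular/plural), head-local match
        # (match/mismatch), and response accuracy (1 = correct, 0 = error).
        df = pd.read_csv("agreement_trials.csv")

        # Error rate per cell: head number x head-local number match.
        error_rates = (1 - df.groupby(["head_number", "local_match"])["correct"]
                             .mean()).unstack("local_match")

        # Attraction effect per head number (mismatch minus match errors);
        # the asymmetry would show up as a larger value for singular heads.
        attraction = error_rates["mismatch"] - error_rates["match"]
        print(error_rates)
        print(attraction)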
  • Veenstra, A. (2014). Semantic and syntactic constraints on the production of subject-verb agreement. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Yang, A., & Chen, A. (2014). Prosodic focus marking in child and adult Mandarin Chinese. In C. Gussenhoven, Y. Chen, & D. Dediu (Eds.), Proceedings of the 4th International Symposium on Tonal Aspects of Language (pp. 54-58).

    Abstract

    This study investigates how Mandarin Chinese speaking children and adults use prosody to mark focus in spontaneous speech. SVO sentences were elicited from 4- and 8-year-olds and adults in a game setting. Sentence-medial verbs were acoustically analysed for both duration and pitch range in different focus conditions. We found that, like the adults, the 8-year-olds used both duration and pitch range to distinguish focus from non-focus. The 4-year-olds, unlike the adults and 8-year-olds, used only duration to distinguish focus from non-focus. None of the three groups of speakers distinguished contrastive focus from non-contrastive focus using pitch range or duration. Regarding the distinction between narrow focus and broad focus, the 4- and 8-year-olds used both pitch range and duration for this purpose, while the adults used only duration.
  • Yang, A., & Chen, A. (2014). Prosodic focus-marking in Chinese four- and eight-year-olds. In N. Campbell, D. Gibbon, & D. Hirst (Eds.), Proceedings of Speech Prosody 2014 (pp. 713-717).

    Abstract

    This study investigates how Mandarin Chinese speaking children use prosody to distinguish focus from non-focus, and focus types differing in size of constituent and contrastivity. SVO sentences were elicited from four- and eight-year-olds in a game setting. Sentence-medial verbs were acoustically analysed for both duration and pitch range in different focus conditions. The children started to use duration to differentiate focus from non-focus at the age of four, but their use of pitch range varied with age and depended on the non-focus conditions (pre- vs. postfocus) and the lexical tones of the verbs. Further, the children in both age groups used pitch range but not duration to differentiate narrow focus from broad focus, and they did not differentiate contrastive narrow focus from non-contrastive narrow focus using duration or pitch range. The results indicate that Chinese children acquire the prosodic means (duration and pitch range) of marking focus in stages, and their acquisition of these two means appears to be early compared to children speaking an intonation language such as Dutch.
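
    As a rough illustration of the acoustic measures mentioned in the two abstracts above, the sketch below computes an interval's duration and its pitch range in semitones from sampled F0 values; the example values are hypothetical and the original studies' exact measurement procedure is not reproduced here.

        import numpy as np

        def pitch_range_semitones(f0_hz):
            """Pitch range of a voiced stretch in semitones: 12 * log2(max / min)."""
            f0 = np.asarray(f0_hz, dtype=float)
            f0 = f0[f0 > 0]  # drop unvoiced frames (conventionally coded as 0 Hz)
            return 12.0 * np.log2(f0.max() / f0.min())

        def duration_ms(start_s, end_s):
            """Duration of a labelled interval in milliseconds."""
            return (end_s - start_s) * 1000.0

        # Hypothetical F0 samples (Hz) over a sentence-medial verb labelled
        # from 0.52 s to 0.81 s in the recording.
        f0_samples = [182.0, 195.4, 210.3, 0.0, 201.7, 188.9]
        print(duration_ms(0.52, 0.81))            # ~290 ms
        print(pitch_range_semitones(f0_samples))  # range in semitones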
  • Roelofs, A., Meyer, A. S., & Levelt, W. J. M. (1998). A case for the lemma/lexeme distinction in models of speaking: Comment on Caramazza and Miozzo (1997). Cognition, 69(2), 219-230. doi:10.1016/S0010-0277(98)00056-0.

    Abstract

    In a recent series of papers, Caramazza and Miozzo [Caramazza, A., 1997. How many levels of processing are there in lexical access? Cognitive Neuropsychology 14, 177-208; Caramazza, A., Miozzo, M., 1997. The relation between syntactic and phonological knowledge in lexical access: evidence from the 'tip-of-the-tongue' phenomenon. Cognition 64, 309-343; Miozzo, M., Caramazza, A., 1997. On knowing the auxiliary of a verb that cannot be named: evidence for the independence of grammatical and phonological aspects of lexical knowledge. Journal of Cognitive Neuroscience 9, 160-166] argued against the lemma/lexeme distinction made in many models of lexical access in speaking, including our network model [Roelofs, A., 1992. A spreading-activation theory of lemma retrieval in speaking. Cognition 42, 107-142; Levelt, W.J.M., Roelofs, A., Meyer, A.S., 1998. A theory of lexical access in speech production. Behavioral and Brain Sciences, (in press)]. Their case was based on the observations that grammatical class deficits of brain-damaged patients and semantic errors may be restricted to either spoken or written forms, and that the grammatical gender of a word and information about its form can be independently available in tip-of-the-tongue states (TOTs). In this paper, we argue that although our model concerns speaking and takes no position on writing, extensions to writing are possible that are compatible with the evidence from aphasia and speech errors. Furthermore, our model does not predict a dependency between gender and form retrieval in TOTs. Finally, we argue that Caramazza and Miozzo have not accounted for important parts of the evidence motivating the lemma/lexeme distinction, such as word frequency effects in homophone production, the strict ordering of gender and phoneme access in LRP data, and the chronometric and speech error evidence for the production of complex morphology.