  • Barthel, M., Sauppe, S., Levinson, S. C., & Meyer, A. S. (2016). The timing of utterance planning in task-oriented dialogue: Evidence from a novel list-completion paradigm. Frontiers in Psychology, 7: 1858. doi:10.3389/fpsyg.2016.01858.

    Abstract

    In conversation, interlocutors rarely leave long gaps between turns, suggesting that next speakers begin to plan their turns while listening to the previous speaker. The present experiment used analyses of speech onset latencies and eye movements in a task-oriented dialogue paradigm to investigate when speakers start planning their response. Adult German participants heard a confederate describe sets of objects in utterances that either ended in a noun (e.g. Ich habe eine Tür und ein Fahrrad (‘I have a door and a bicycle’)) or a verb form (Ich habe eine Tür und ein Fahrrad besorgt (‘I have gotten a door and a bicycle’)), while the presence or absence of the final verb either was or was not predictable from the preceding sentence structure. In response, participants had to name any unnamed objects they could see in their own display in utterances such as Ich habe ein Ei (‘I have an egg’). The main question was when participants started to plan their response. The results are consistent with the view that speakers begin to plan their turn as soon as sufficient information is available to do so, irrespective of further incoming words.
  • Bobb, S., Huettig, F., & Mani, N. (2016). Predicting visual information during sentence processing: Toddlers activate an object's shape before it is mentioned. Journal of Experimental Child Psychology, 151, 51-64. doi:10.1016/j.jecp.2015.11.002.

    Abstract

    We examined the contents of language-mediated prediction in toddlers by investigating the extent to which toddlers are sensitive to visual-shape representations of upcoming words. Previous studies with adults suggest limits to the degree to which information about the visual form of a referent is predicted during language comprehension in low-constraint sentences. Thirty-month-old toddlers heard either contextually constraining sentences or contextually neutral sentences as they viewed images that were either identical to or shape-related to the heard target label. We observed that toddlers activate shape information of upcoming linguistic input in contextually constraining semantic contexts: Hearing a sentence context that was predictive of the target word activated perceptual information that subsequently influenced visual attention toward shape-related targets. Our findings suggest that visual shape is central to predictive language processing in toddlers.
  • Bosker, H. R., Reinisch, E., & Sjerps, M. J. (2016). Listening under cognitive load makes speech sound fast. In H. van den Heuvel, B. Cranen, & S. Mattys (Eds.), Proceedings of the Speech Processing in Realistic Environments [SPIRE] Workshop (pp. 23-24). Groningen.
  • Bosker, H. R. (2016). Our own speech rate influences speech perception. In J. Barnes, A. Brugos, S. Shattuck-Hufnagel, & N. Veilleux (Eds.), Proceedings of Speech Prosody 2016 (pp. 227-231).

    Abstract

    During conversation, spoken utterances occur in rich acoustic contexts, including speech produced by our interlocutor(s) and speech we produced ourselves. Prosodic characteristics of the acoustic context have been known to influence speech perception in a contrastive fashion: for instance, a vowel presented in a fast context is perceived to have a longer duration than the same vowel in a slow context. Given the ubiquity of the sound of our own voice, it may be that our own speech rate - a common source of acoustic context - also influences our perception of the speech of others. Two experiments were designed to test this hypothesis. Experiment 1 replicated earlier contextual rate effects by showing that hearing pre-recorded fast or slow context sentences alters the perception of ambiguous Dutch target words. Experiment 2 then extended this finding by showing that talking at a fast or slow rate prior to the presentation of the target words also altered the perception of those words. These results suggest that between-talker variation in speech rate production may induce between-talker variation in speech perception, thus potentially explaining why interlocutors tend to converge on speech rate in dialogue settings.

    Supplementary material

    pdf via conference website
  • Broersma, M., Carter, D., & Acheson, D. J. (2016). Cognate costs in bilingual speech production: Evidence from language switching. Frontiers in Psychology, 7: 1461. doi:10.3389/fpsyg.2016.01461.

    Abstract

    This study investigates cross-language lexical competition in the bilingual mental lexicon. It provides evidence for the occurrence of inhibition as well as the commonly reported facilitation during the production of cognates (words with similar phonological form and meaning in two languages) in a mixed picture naming task by highly proficient Welsh-English bilinguals. Previous studies have typically found cognate facilitation. It has previously been proposed (with respect to non-cognates) that cross-language inhibition is limited to low-proficient bilinguals; therefore, we tested highly proficient, early bilinguals. In a mixed naming experiment (i.e., picture naming with language switching), 48 highly proficient, early Welsh-English bilinguals named pictures in Welsh and English, including cognate and non-cognate targets. Participants were English-dominant, Welsh-dominant, or had equal language dominance. The results showed evidence for cognate inhibition in two ways. First, both facilitation and inhibition were found on the cognate trials themselves, compared to non-cognate controls, modulated by the participants' language dominance. The English-dominant group showed cognate inhibition when naming in Welsh (and no difference between cognates and controls when naming in English), and the Welsh-dominant and equal dominance groups generally showed cognate facilitation. Second, cognate inhibition was found as a behavioral adaptation effect, with slower naming for non-cognate filler words in trials after cognates than after non-cognate controls. This effect was consistent across all language dominance groups and both target languages, suggesting that cognate production involved cognitive control even if this was not measurable in the cognate trials themselves. Finally, the results replicated patterns of symmetrical switch costs, as commonly reported for balanced bilinguals. We propose that cognate processing might be affected by two different processes, namely competition at the lexical-semantic level and facilitation at the word form level, and that facilitation at the word form level might (sometimes) outweigh any effects of inhibition at the lemma level. In sum, this study provides evidence that cognate naming can cause costs in addition to benefits. The finding of cognate inhibition, particularly for the highly proficient bilinguals tested, provides strong evidence for the occurrence of lexical competition across languages in the bilingual mental lexicon.
  • Diaz, B., Mitterer, H., Broersma, M., Escera, C., & Sebastián-Gallés, N. (2016). Variability in L2 phonemic learning originates from speech-specific capabilities: An MMN study on late bilinguals. Bilingualism: Language and Cognition, 19(5), 955-970. doi:10.1017/S1366728915000450.

    Abstract

    People differ in their ability to perceive second language (L2) sounds. In early bilinguals, the variability in learning L2 phonemes stems from speech-specific capabilities (Díaz, Baus, Escera, Costa & Sebastián-Gallés, 2008). The present study addresses whether speech-specific capabilities similarly explain variability in late bilinguals. Event-related potentials were recorded (using a design similar to Díaz et al., 2008) in two groups of late Dutch–English bilinguals who were good or poor in overtly discriminating the L2 English vowels /ε-æ/. The mismatch negativity, an index of discrimination sensitivity, was similar between the groups in conditions involving pure tones (of different length, frequency, and presentation order) but was attenuated in poor L2 perceivers for native, unknown, and L2 phonemes. These results suggest that variability in L2 phonemic learning originates from speech-specific capabilities and imply a continuity of L2 phonemic learning mechanisms throughout the lifespan.
  • Dingemanse, M., Schuerman, W. L., Reinisch, E., Tufvesson, S., & Mitterer, H. (2016). What sound symbolism can and cannot do: Testing the iconicity of ideophones from five languages. Language, 92(2), e117-e133. doi:10.1353/lan.2016.0034.

    Abstract

    Sound symbolism is a phenomenon with broad relevance to the study of language and mind, but there has been a disconnect between its investigations in linguistics and psychology. This study tests the sound-symbolic potential of ideophones—words described as iconic—in an experimental task that improves over prior work in terms of ecological validity and experimental control. We presented 203 ideophones from five languages to eighty-two Dutch listeners in a binary-choice task, in four versions: original recording, full diphone resynthesis, segments-only resynthesis, and prosody-only resynthesis. Listeners guessed the meaning of all four versions above chance, confirming the iconicity of ideophones and showing the viability of speech synthesis as a way of controlling for segmental and suprasegmental properties in experimental studies of sound symbolism. The success rate was more modest than in prior studies using pseudowords like bouba/kiki, implying that assumptions based on such words cannot simply be transferred to natural languages. Prosody and segments together drive the effect: neither alone is sufficient, showing that segments and prosody work together as cues supporting iconic interpretations. The findings cast doubt on attempts to ascribe iconic meanings to segments alone and support a view of ideophones as words that combine arbitrariness and iconicity. We discuss the implications for theory and methods in the empirical study of sound symbolism and iconicity.

    Supplementary material

    https://muse.jhu.edu/article/619540
  • Gannon, E., He, J., Gao, X., & Chaparro, B. (2016). RSVP Reading on a Smart Watch. In Proceedings of the Human Factors and Ergonomics Society 2016 Annual Meeting (pp. 1130-1134).

    Abstract

    Reading with Rapid Serial Visual Presentation (RSVP) has shown promise for optimizing screen space and increasing reading speed without compromising comprehension. Given the wide use of small-screen devices, the present study compared RSVP and traditional reading on three types of reading comprehension, reading speed, and subjective measures on a smart watch. Results confirm previous studies that show faster reading speed with RSVP without detracting from comprehension. Subjective data indicate that traditional reading is strongly preferred to RSVP as a primary reading method. Given the optimal use of screen space, increased speed, and comparable comprehension, future studies should focus on making RSVP a more comfortable format.
  • Gibson, M., & Bosker, H. R. (2016). Over vloeiendheid in spraak [On fluency in speech]. Tijdschrift Taal, 7(10), 40-45.
  • Gordon, P. C., & Hoedemaker, R. S. (2016). Effective scheduling of looking and talking during rapid automatized naming. Journal of Experimental Psychology: Human Perception and Performance, 42(5), 742-760. doi:10.1037/xhp0000171.

    Abstract

    Rapid automatized naming (RAN) is strongly related to literacy gains in developing readers, reading disabilities, and reading ability in children and adults. Because successful RAN performance depends on the close coordination of a number of abilities, it is unclear what specific skills drive this RAN-reading relationship. The current study used concurrent recordings of young adult participants' vocalizations and eye movements during the RAN task to assess how individual variation in RAN performance depends on the coordination of visual and vocal processes. Results showed that fast RAN times are facilitated by having the eyes 1 or more items ahead of the current vocalization, as long as the eyes do not get so far ahead of the voice as to require a regressive eye movement to an earlier item. These data suggest that optimizing RAN performance is a problem of scheduling eye movements and vocalization given memory constraints and the efficiency of encoding and articulatory control. Both RAN completion time (conventionally used to indicate RAN performance) and eye-voice relations predicted some aspects of participants' eye movements on a separate sentence reading task. However, eye-voice relations predicted additional features of first-pass reading that were not predicted by RAN completion time. This shows that measurement of eye-voice patterns can identify important aspects of individual variation in reading that are not identified by the standard measure of RAN performance. We argue that RAN performance predicts reading ability because both tasks entail challenges of scheduling cognitive and linguistic processes that operate simultaneously on multiple linguistic inputs.

  • Gordon, P. C., Lowder, M. W., & Hoedemaker, R. S. (2016). Reading in normally aging adults. In H. Wright (Ed.), Cognitive-Linguistic Processes and Aging (pp. 165-192). Amsterdam: Benjamins. doi:10.1075/z.200.07gor.

    Abstract

    The activity of reading raises fundamental theoretical and practical questions about healthy cognitive aging. Reading relies greatly on knowledge of patterns of language and of meaning at the level of words and topics of text. Further, this knowledge must be rapidly accessed so that it can be coordinated with processes of perception, attention, memory and motor control that sustain skilled reading at rates of four to five words a second. As such, reading depends both on crystallized semantic intelligence, which grows or is maintained through healthy aging, and on components of fluid intelligence, which decline with age. Reading is important to older adults because it facilitates completion of everyday tasks that are essential to independent living. In addition, it entails the kind of active mental engagement that can preserve and deepen the cognitive reserve that may mitigate the negative consequences of age-related changes in the brain. This chapter reviews research on the front end of reading (word recognition) and on the back end of reading (text memory) because both of these abilities are surprisingly robust to declines associated with cognitive aging. For word recognition, that robustness is surprising because rapid processing of the sort found in reading is usually impaired by aging; for text memory, it is surprising because other types of episodic memory performance (e.g., paired associates) substantially decline in aging. These two otherwise quite different levels of reading comprehension remain robust because they draw on the knowledge of language that older adults gain through a lifetime of experience with language.
  • De Groot, F., Koelewijn, T., Huettig, F., & Olivers, C. N. L. (2016). A stimulus set of words and pictures matched for visual and semantic similarity. Journal of Cognitive Psychology, 28(1), 1-15. doi:10.1080/20445911.2015.1101119.

    Abstract

    Researchers in different fields of psychology have been interested in how vision and language interact, and what type of representations are involved in such interactions. We introduce a stimulus set that facilitates such research (available online). The set consists of 100 words each of which is paired with four pictures of objects: One semantically similar object (but visually dissimilar), one visually similar object (but semantically dissimilar), and two unrelated objects. Visual and semantic similarity ratings between corresponding items are provided for every picture for Dutch and for English. In addition, visual and linguistic parameters of each picture are reported. We thus present a stimulus set from which researchers can select, on the basis of various parameters, the items most optimal for their research question.

  • De Groot, F., Huettig, F., & Olivers, C. N. L. (2016). Revisiting the looking at nothing phenomenon: Visual and semantic biases in memory search. Visual Cognition, 24, 226-245. doi:10.1080/13506285.2016.1221013.

    Abstract

    When visual stimuli remain present during search, people spend more time fixating objects that are semantically or visually related to the target instruction than fixating unrelated objects. Are these semantic and visual biases also observable when participants search within memory? We removed the visual display prior to search while continuously measuring eye movements towards locations previously occupied by objects. The target absent trials contained objects that were either visually or semantically related to the target instruction. When the overall mean proportion of fixation time was considered, we found biases towards the location previously occupied by the target, but failed to find biases towards visually or semantically related objects. However, in two experiments, the pattern of biases towards the target over time provided a reliable predictor for biases towards the visually and semantically related objects. We therefore conclude that visual and semantic representations alone can guide eye movements in memory search, but that orienting biases are weak when the stimuli are no longer present.
  • De Groot, F., Huettig, F., & Olivers, C. N. L. (2016). When meaning matters: The temporal dynamics of semantic influences on visual attention. Journal of Experimental Psychology: Human Perception and Performance, 42(2), 180-196. doi:10.1037/xhp0000102.

    Abstract

    An important question is to what extent visual attention is driven by the semantics of individual objects, rather than by their visual appearance. This study investigates the hypothesis that timing is a crucial factor in the occurrence and strength of semantic influences on visual orienting. To assess the dynamics of such influences, the target instruction was presented either before or after visual stimulus onset, while eye movements were continuously recorded throughout the search. The results show a substantial but delayed bias in orienting towards semantically related objects compared to visually related objects when target instruction is presented before visual stimulus onset. However, this delay can be completely undone by presenting the visual information before the target instruction (Experiment 1). Moreover, the absence or presence of visual competition does not change the temporal dynamics of the semantic bias (Experiment 2). Visual orienting is thus driven by priority settings that dynamically shift between visual and semantic representations, with each of these types of bias operating largely independently. The findings bridge the divide between the visual attention and the psycholinguistic literature.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2016). Encouraging prediction during production facilitates subsequent comprehension: Evidence from interleaved object naming in sentence context and sentence reading. Quarterly Journal of Experimental Psychology, 69(6), 1056-1063. doi:10.1080/17470218.2015.1131309.

    Abstract

    Many studies have shown that a supportive context facilitates language comprehension. A currently influential view is that language production may support prediction in language comprehension. Experimental evidence for this, however, is relatively sparse. Here we explored whether encouraging prediction in a language production task encourages the use of predictive contexts in an interleaved comprehension task. In Experiment 1a, participants listened to the first part of a sentence and provided the final word by naming aloud a picture. The picture name was predictable or not predictable from the sentence context. Pictures were named faster when they could be predicted than when this was not the case. In Experiment 1b the same sentences, augmented by a final spill-over region, were presented in a self-paced reading task. No difference in reading times for predictive vs. non-predictive sentences was found. In Experiment 2, reading and naming trials were intermixed. In the naming task, the advantage for predictable picture names was replicated. More importantly, now reading times for the spill-over region were considerably faster for predictive vs. non-predictive sentences. We conjecture that these findings fit best with the notion that prediction in the service of language production encourages the use of predictive contexts in comprehension. Further research is required to identify the exact mechanisms by which production exerts its influence on comprehension.
  • Huettig, F., & Mani, N. (2016). Is prediction necessary to understand language? Probably not. Language, Cognition and Neuroscience, 31(1), 19-31. doi:10.1080/23273798.2015.1072223.

    Abstract

    Many psycholinguistic experiments suggest that prediction is an important characteristic of language processing. Some recent theoretical accounts in the cognitive sciences (e.g., Clark, 2013; Friston, 2010) and psycholinguistics (e.g., Dell & Chang, 2014) appear to suggest that prediction is even necessary to understand language. In the present opinion paper we evaluate this proposal. We first critically discuss several arguments that may appear to be in line with the notion that prediction is necessary for language processing. These arguments include that prediction provides a unified theoretical principle of the human mind and that it pervades cortical function. We discuss whether evidence of human abilities to detect statistical regularities is necessarily evidence for predictive processing and evaluate suggestions that prediction is necessary for language learning. Five arguments are then presented that question the claim that all language processing is predictive in nature. We point out that not all language users appear to predict language and that suboptimal input makes prediction often very challenging. Prediction, moreover, is strongly context-dependent and impeded by resource limitations. We also argue that it may be problematic that most experimental evidence for predictive language processing comes from 'prediction-encouraging' experimental set-ups. Finally, we discuss possible ways that may lead to a further resolution of this debate. We conclude that languages can be learned and understood in the absence of prediction. Claims that all language processing is predictive in nature are premature.
  • Huettig, F., & Janse, E. (2016). Individual differences in working memory and processing speed predict anticipatory spoken language processing in the visual world. Language, Cognition and Neuroscience, 31(1), 80-93. doi:10.1080/23273798.2015.1047459.

    Abstract

    It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. Several mechanisms of predictive language processing have been proposed. The possible influence of mediating factors such as working memory and processing speed, however, has hardly been explored. We sought to find evidence for such an influence using an individual differences approach. A total of 105 participants from 32 to 77 years of age received spoken instructions (e.g., "Kijk naar de[COM] afgebeelde piano[COM]", 'look at the displayed piano') while viewing four objects. Articles (Dutch "het" or "de") were gender-marked such that the article agreed in gender only with the target. Participants could thus use gender information from the article to predict the upcoming target object. The average participant anticipated the target objects well in advance of the critical noun. Multiple regression analyses showed that working memory and processing speed had the largest mediating effects: Enhanced working memory abilities and faster processing speed supported anticipatory spoken language processing. These findings suggest that models of predictive language processing must take mediating factors such as working memory and processing speed into account. More generally, our results are consistent with the notion that working memory grounds language in space and time, linking linguistic and visual-spatial representations.
  • Jiang, T., Zhang, W., Wen, W., Zhu, H., Du, H., Zhu, X., Gao, X., Zhang, H., Dong, Q., & Chen, C. (2016). Reevaluating the two-representation model of numerical magnitude processing. Memory & Cognition, 44, 162-170. doi:10.3758/s13421-015-0542-2.

    Abstract

    One debate in mathematical cognition centers on the single-representation model versus the two-representation model. Using an improved number Stroop paradigm (i.e., systematically manipulating physical size distance), in the present study we tested the predictions of the two models for number magnitude processing. The results supported the single-representation model and, more importantly, explained how a design problem (failure to manipulate physical size distance) and an analytical problem (failure to consider the interaction between congruity and task-irrelevant numerical distance) might have contributed to the evidence used to support the two-representation model. This study, therefore, can help settle the debate between the single-representation and two-representation models.
  • Jongman, S. R. (2016). Sustained attention in language production. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Supplementary material

    Full Text (via Radboud)
  • Koch, X., & Janse, E. (2016). Speech rate effects on the processing of conversational speech across the adult life span. Journal of the Acoustical Society of America, 139(4), 1618-1636. doi:10.1121/1.4944032.

    Abstract

    This study investigates the effect of speech rate on spoken word recognition across the adult life span. Contrary to previous studies, conversational materials with a natural variation in speech rate were used rather than lab-recorded stimuli that are subsequently artificially time-compressed. It was investigated whether older adults' speech recognition is more adversely affected by increased speech rate compared to younger and middle-aged adults, and which individual listener characteristics (e.g., hearing, fluid cognitive processing ability) predict the size of the speech rate effect on recognition performance. In an eye-tracking experiment, participants indicated with a mouse-click which visually presented words they recognized in a conversational fragment. Click response times, gaze, and pupil size data were analyzed. As expected, click response times and gaze behavior were affected by speech rate, indicating that word recognition is more difficult if speech rate is faster. Contrary to earlier findings, increased speech rate affected the age groups to the same extent. Fluid cognitive processing ability predicted general recognition performance, but did not modulate the speech rate effect. These findings emphasize that earlier results of age by speech rate interactions mainly obtained with artificially speeded materials may not generalize to speech rate variation as encountered in conversational speech.
