Falk Huettig

Presentations

  • Eisner, F., Kumar, U., Mishra, R. K., Nand Tripathi, V., Guleria, A., Singh, P., & Huettig, F. (2015). The effect of literacy acquisition on cortical and subcortical networks: A longitudinal approach. Talk presented at the 7th Annual Meeting of the Society for the Neurobiology of Language. Chicago, US. 2015-10-15 - 2015-10-17.

    Abstract

    How do human cultural inventions such as reading result in neural re-organization? Previous cross-sectional studies have reported extensive effects of literacy on the neural systems for vision and language (Dehaene et al., 2010, Science; Castro-Caldas et al., 1998, Brain; Petersson et al., 1998, NeuroImage; Carreiras et al., 2009, Nature). In this first longitudinal study with completely illiterate participants, we measured brain responses to speech, text, and other categories of visual stimuli with fMRI before and after a group of illiterate participants in India completed a literacy training program in which they learned to read and write Devanagari script. A literate and an illiterate no-training control group were matched to the training group in terms of socioeconomic background and were recruited from the same societal community in two villages of a rural area near Lucknow, India. This design permitted investigating effects of literacy cross-sectionally across groups before training (N=86) as well as longitudinally (training group N=25). The two analysis approaches yielded converging results: Literacy was associated with enhanced, mainly left-lateralized responses to written text along the ventral stream (including lingual gyrus, fusiform gyrus, and parahippocampal gyrus), dorsal stream (intraparietal sulcus), and (pre-) motor systems (pre-central sulcus, supplementary motor area), thalamus (pulvinar), and cerebellum. Significantly reduced responses were observed bilaterally in the superior parietal lobe (precuneus) and in the right angular gyrus. These positive effects corroborate and extend previous findings from cross-sectional studies. However, effects of literacy were specific to written text and (to a lesser extent) to false fonts. Contrary to previous research, we found no direct evidence of literacy affecting the processing of other types of visual stimuli such as faces, tools, houses, and checkerboards.
Furthermore, unlike in some previous studies, we did not find any evidence for effects of literacy on responses in the auditory cortex in our Hindi-speaking participants. We conclude that learning to read has a specific and extensive effect on the processing of written text along the visual pathways, including low-level thalamic nuclei, high-level systems in the intraparietal sulcus and the fusiform gyrus, and motor areas. The absence of an effect of literacy on responses in the auditory cortex in particular raises questions about the extent to which phonological representations in the auditory cortex are altered by literacy acquisition or recruited online during reading.
  • de Groot, F., Huettig, F., & Olivers, C. N. (2015). Semantic influences on visual attention. Talk presented at the 15th NVP Winter Conference. Egmond aan Zee, The Netherlands. 2015-12-17 - 2015-12-19.

    Abstract

    To what extent is visual attention driven by the semantics of individual objects, rather than by their visual appearance? To investigate this we continuously measured eye movements, while observers searched through displays of common objects for an aurally instructed target. On crucial trials, the target was absent, but the display contained objects that were either semantically or visually related to the target. We hypothesized that timing is crucial in the occurrence and strength of semantic influences on visual orienting, and therefore presented the target instruction either before, during, or after (memory-based search) picture onset. When the target instruction was presented before picture onset we found a substantial, but delayed bias in orienting towards semantically related objects as compared to visually related objects. However, this delay disappeared when the visual information was presented before the target instruction. Furthermore, the temporal dynamics of the semantic bias did not change in the absence of visual competition. These results point to cascaded but independent influences of semantic and visual representations on attention. In addition, the results of the memory-based search studies suggest that visual and semantic biases only arise when the visual stimuli are present. Although we consistently found that people fixate at locations previously occupied by the target object (a replication of earlier findings), we did not find such biases for visually or semantically related objects. Overall, our studies show that the question of whether visual orienting is driven by semantic content is better rephrased as when visual orienting is driven by semantic content.
  • de Groot, F., Huettig, F., & Olivers, C. (2015). When meaning matters: The temporal dynamics of semantic influences on visual attention. Poster presented at the Psychonomic Society's 56th Annual Meeting, Chicago, USA.
  • de Groot, F., Huettig, F., & Olivers, C. (2015). When meaning matters: The temporal dynamics of semantic influences on visual attention. Talk presented at the 23rd Annual Workshop on Object Perception, Attention, and Memory. Chicago, USA. 2015-10-19.
  • de Groot, F., Huettig, F., & Olivers, C. (2015). When meaning matters: The temporal dynamics of semantic influences on visual attention. Poster presented at the 19th Meeting of the European Society for Cognitive Psychology (ESCoP 2015), Paphos, Cyprus.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2015). Context-dependent employment of mechanisms in anticipatory language processing. Talk presented at the 15th NVP Winter Conference. Egmond aan Zee, The Netherlands. 2015-12-17 - 2015-12-19.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2015). Doing a production task encourages prediction: Evidence from interleaved object naming and sentence reading. Poster presented at the 28th Annual CUNY Conference on Human Sentence Processing, Los Angeles (CA, USA).

    Abstract

    Prominent theories of predictive language processing assume that language production processes are used to anticipate upcoming linguistic input during comprehension (Dell & Chang, 2014; Pickering & Garrod, 2013). Here, we explored the converse case: Does a task set including production in addition to comprehension encourage prediction, compared to a task only including comprehension? To test this hypothesis, participants carried out a cross-modal naming task (Exp 1a), a self-paced reading task (Exp 1b) that did not include overt production, and a task (Exp 1c) in which naming and reading trials were evenly interleaved. We used the same predictable (N = 40) and non-predictable (N = 40) sentences in all three tasks. The sentences consisted of a fixed agent, a transitive verb and a predictable or non-predictable target word (The man breaks a glass vs. The man borrows a glass). The mean cloze probability in the predictable sentences was .39 (ranging from .06 to .8; zero in the non-predictable sentences). A total of 162 volunteers took part in the experiment which was run in a between-participants design. In Exp 1a, fifty-four participants listened to recordings of the sentences which ended right before the spoken target word. Coinciding with the end of the playback, a picture of the target word was shown which the participants were asked to name as fast as possible. Analyses of their naming latencies revealed a statistically significant naming advantage of 108 ms on predictable over non-predictable trials. Moreover, we found that the objects’ naming advantage was predicted by the target words’ cloze probability in the sentences (r = .347, p = .038). In Exp 1b, 54 participants were asked to read the same sentences in a self-paced fashion. To allow for testing of potential spillover effects, we added a neutral prepositional phrase (breaks a glass from the collection/borrows a glass from the neighbor) to each sentence. 
The sentences were read word-by-word, advancing by pushing the space bar. On 30% of the trials, comprehension questions were used to maintain participants' focus on comprehending the sentences. Analyses of their spillover region reading times revealed a numerical advantage (8 ms; t(spillover) = -1.1, n.s.) in the predictable as compared to the non-predictable condition. Importantly, the analysis of participants' responses to the comprehension questions showed that they understood the sentences (mean accuracy = 93%). In Exp 1c, the task comprised 50% naming trials and 50% reading trials which appeared in random order. Fifty-four participants named and read the same objects and sentences as in the previous versions. The results showed a naming advantage on predictable over non-predictable items (99 ms) and a positive correlation between the items’ cloze probability and their naming advantage (r = .322, p = .055). Crucially, the post-target reading time analysis showed that with naming trials and reading trials interleaved, there was also a statistically reliable prediction effect on reading trials. Participants were 19 ms faster at reading the spillover region on predictable relative to non-predictable items (t(spillover) = -2.624). To summarize, although we used the same sentences in all sub-experiments, we observed effects of prediction only when the task set involved production. In the reading-only experiment (Exp 1b), no evidence for anticipation was obtained although participants clearly understood the sentences, and the same sentences yielded reading facilitation when interleaved with naming trials (Exp 1c). This suggests that predictive language processing can be modulated by the comprehenders’ task set. When the task set involves language production, as is often the case in natural conversation, comprehenders appear to engage in prediction to a stronger degree than in pure comprehension tasks. 
We will discuss the notion that language production may engage prediction because being able to predict words another person is about to say might optimize the comprehension process and enable smooth turn-taking.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2015). Event knowledge and word associations jointly influence predictive processing during discourse comprehension. Poster presented at the 28th Annual CUNY Conference on Human Sentence Processing, Los Angeles (CA, USA).

    Abstract

    A substantial body of literature has shown that readers and listeners often anticipate information. An open question concerns the mechanisms underlying predictive language processing. Multiple mechanisms have been suggested. One proposal is that comprehenders use event knowledge to predict upcoming words. Other theoretical frameworks propose that predictions are made based on simple word associations. In a recent EEG study, Metusalem and colleagues reported evidence for the modulating influence of event knowledge on prediction. They examined the degree to which event knowledge is activated during sentence comprehension. Their participants read two sentences, establishing an event scenario, which were followed by a final sentence containing one of three target words: a highly expected word, a semantically unexpected word that was related to the described event, or a semantically unexpected and event-unrelated word (see Figure for an example). Analyses of participants’ ERPs elicited by the target words revealed a three-way split with regard to the amplitude of the N400 elicited by the different types of target: the expected targets elicited the smallest N400, and the unexpected and event-unrelated targets elicited the largest N400. Importantly, the amplitude of the N400 elicited by the unexpected but event-related targets was significantly attenuated relative to the amplitude of the N400 elicited by the unexpected and event-unrelated targets. Metusalem et al. concluded that event knowledge is immediately available to constrain on-line language processing. Based on a post-hoc analysis, the authors rejected the possibility that the results could be explained by simple word associations. In the present study, we addressed the role of simple word associations in discourse comprehension more directly. Specifically, we explored the contribution of associative priming to the graded N400 pattern seen in Metusalem et al.’s study. We conducted two EEG experiments. 
In Experiment 1, we reran Metusalem and colleagues’ context manipulation and closely replicated their results. In Experiment 2, we selected two words from the event-establishing sentences which were most strongly associated with the unexpected but event-related targets in the final sentences. Each of the two associates was then placed in a neutral carrier sentence. We controlled that none of the other words in these carrier sentences was associatively related to the target words. Importantly, the two carrier sentences did not build up a coherent event. We recorded EEG while participants read the carrier sentences followed by the same final sentences as in Experiment 1. The results showed that as in Experiment 1 the amplitude of the N400 elicited by both types of unexpected target words was larger than the N400 elicited by the highly expected target. Moreover, we found a global tendency towards the critical difference between event-related and event-unrelated unexpected targets which reached statistical significance only at parietal electrodes over the right hemisphere. Because the difference between event-related and event-unrelated conditions was larger when the sentences formed a coherent event compared to when they did not, our results suggest that associative priming alone cannot account for the N400 pattern observed in our Experiment 1 (and in the study by Metusalem et al.). However, because part of the effect remained, probably due to associative facilitation, the findings demonstrate that during discourse reading both event knowledge activation and simple word associations jointly contribute to the prediction process. The results highlight that multiple mechanisms underlie predictive language processing.
  • Huettig, F. (2015). Cause or effect? What commonalities between illiterates and individuals with dyslexia can tell us about dyslexia. Talk presented at the Reading in the Forest workshop. Annweiler, Germany. 2015-10-26 - 2015-10-28.

    Abstract

    I will discuss recent research with illiterates and individuals with dyslexia which suggests that many cognitive 'deficiencies' proposed as possible causes of dyslexia are simply a consequence of decreased reading experience. I will argue that in order to make further progress towards an understanding of the causes of dyslexia it is necessary to appropriately distinguish between cause and effect.
  • Huettig, F. (2015). Effekte der Literalität auf die Kognition [Effects of literacy on cognition]. Talk presented at the final conference of the collaborative project Alpha plus Job (Verbundprojekt Alpha plus Job). Bamberg, Germany. 2015-01.
  • Huettig, F. (2015). The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants. Talk presented at the Psychology Department, University of York. York, UK. 2015-11.
  • Huettig, F. (2015). The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants. Talk presented at the Individual differences in language processing across the adult life span workshop. Nijmegen, The Netherlands. 2015-12-10 - 2015-12-11.
  • Huettig, F. (2015). The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants. Talk presented at the Psychology Department, University of Glasgow. Glasgow, Scotland. 2015-11.
  • Huettig, F. (2015). The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants. Talk presented at the Psychology Department, University of Leeds. Leeds, UK. 2015-11.
  • Huettig, F. (2015). The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants. Talk presented at the Psychology Department, University of Edinburgh. Edinburgh, Scotland. 2015-09.
  • Huettig, F., Kumar, U., Mishra, R. K., Tripathi, V., Guleria, A., Prakash Singh, J., & Eisner, F. (2015). The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants. Talk presented at the 19th Meeting of the European Society for Cognitive Psychology (ESCoP 2015). Paphos, Cyprus. 2015-09-17 - 2015-09-20.

    Abstract

    How do human cultural inventions such as reading result in neural re-organization? In this first longitudinal study with young completely illiterate adult participants, we measured brain responses to speech, text, and other categories of visual stimuli with fMRI before and after a group of illiterate participants in India completed a literacy training program in which they learned to read and write Devanagari script. A literate and an illiterate no-training control group were matched to the training group in terms of socioeconomic background and were recruited from the same societal community in two villages of a rural area near Lucknow, India. This design permitted investigating effects of literacy cross-sectionally across groups before training (N=86) as well as longitudinally (training group N=25). The two analysis approaches yielded converging results: Literacy was associated with enhanced, left-lateralized responses to written text along the ventral stream (including lingual gyrus, fusiform gyrus, and parahippocampal gyrus), dorsal stream (intraparietal sulcus), and (pre-) motor systems (pre-central sulcus, supplementary motor area) and thalamus (pulvinar). Significantly reduced responses were observed bilaterally in the superior parietal lobe (precuneus) and in the right angular gyrus. These effects corroborate and extend previous findings from cross-sectional studies. However, effects of literacy were specific to written text and (to a lesser extent) to false fonts. We did not find any evidence for effects of literacy on responses in the auditory cortex in our Hindi-speaking participants. This raises questions about the extent to which phonological representations are altered by literacy acquisition.
  • Huettig, F., Kumar, U., Mishra, R. K., Tripathi, V., Guleria, A., Prakash Singh, J., & Eisner, F. (2015). The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants. Talk presented at the 21st Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2015). Valletta, Malta. 2015-09-03 - 2015-09-05.

  • Huettig, F., & Guerra, E. (2015). Testing the limits of prediction in language processing: Prediction occurs but far from always. Poster presented at the 21st Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2015), Valletta, Malta.
  • Mani, N., Daum, M., & Huettig, F. (2015). “Pro-active” in Many Ways: Evidence for Multiple Mechanisms in Prediction. Talk presented at the Biennial Meeting of the Society for Research in Child Development (SRCD 2015). Philadelphia, Pennsylvania, USA. 2015-03-19 - 2015-03-21.
  • Ostarek, M., & Huettig, F. (2015). Grounding language in the visual system: Visual noise interferes more with concrete than abstract word processing. Poster presented at the 19th Meeting of the European Society for Cognitive Psychology (ESCoP 2015), Paphos, Cyprus.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2016). The effects of orthographic transparency on the reading system: Insights from a computational model of reading development. Talk presented at the Experimental Psychology Society, London Meeting. London, U.K. 2016-01-06 - 2016-01-08.
  • Huettig, F. (2013). Anticipatory eye movements and predictive language processing. Talk presented at the ZiF research group on "Competition and Priority Control in Mind and Brain". Bielefeld, Germany. 2013-07.
  • Huettig, F., Mani, N., Mishra, R. K., & Brouwer, S. (2013). Literacy as a proxy for experience: Reading ability predicts anticipatory language processing in children, low literate adults, and adults with dyslexia. Poster presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2013), Marseille, France.
  • Huettig, F., Mishra, R. K., Kumar, U., Singh, J. P., Guleria, A., & Tripathi, V. (2013). Phonemic and syllabic awareness of adult literates and illiterates in an Indian alphasyllabic language. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2013]. Vienna, Austria. 2013-03-24 - 2013-03-27.
  • Huettig, F., Mani, N., Mishra, R. K., & Brouwer, S. (2013). Reading ability predicts anticipatory language processing in children, low literate adults, and adults with dyslexia. Talk presented at the 54th Annual Meeting of the Psychonomic Society. Toronto, Canada. 2013-11-14 - 2013-11-17.

  • Huettig, F., Mani, N., Mishra, R. K., & Brouwer, S. (2013). Reading ability predicts anticipatory language processing in children, low literate adults, and adults with dyslexia. Talk presented at the 11th International Symposium of Psycholinguistics. Tenerife, Spain. 2013-03-20 - 2013-03-23.

  • Janse, E., Huettig, F., & Jesse, A. (2013). Working memory modulates the immediate use of context for recognizing words in sentences. Talk presented at the 5th Workshop on Speech in Noise: Intelligibility and Quality. Vitoria, Spain. 2013-01-10 - 2013-01-11.
  • Lai, V. T., & Huettig, F. (2013). When anticipation meets emotion: EEG evidence for distinct processing mechanisms. Poster presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2013), Marseille, France.
  • Lai, V. T., & Huettig, F. (2013). When anticipation meets emotion: EEG evidence for distinct processing mechanisms. Poster presented at the Annual Meeting of the Society for the Neurobiology of Language, San Diego, US.
  • Mani, N., & Huettig, F. (2013). Reading ability predicts anticipatory language processing in 8 year olds. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2013]. Vienna, Austria. 2013-03-24 - 2013-03-27.
  • Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2013). Anticipating references to objects during sentence comprehension. Talk presented at the Experimental Psychology Society meeting (EPS). Bangor, UK. 2013-07-03 - 2013-07-05.
  • Rommers, J., Meyer, A. S., Piai, V., & Huettig, F. (2013). Constraining the involvement of language production in comprehension: A comparison of object naming and object viewing in sentence context. Talk presented at the 19th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2013]. Marseille, France. 2013-09-02 - 2013-09-04.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Both phonological grain-size and general processing speed determine literacy related differences in language mediated eye gaze: Evidence from a connectionist model. Poster presented at The 18th Conference of the European Society for Cognitive Psychology [ESCOP 2013], Budapest, Hungary.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Semantic and visual competition eliminates the influence of rhyme overlap in spoken language processing. Poster presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2013], Marseille, France.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Modelling the effect of literacy on multimodal interactions during spoken language processing in the visual world. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2013]. Vienna, Austria. 2013-03-24 - 2013-03-27.

    Abstract

    Recent empirical evidence suggests that language-mediated eye gaze around the visual world varies across individuals and is partly determined by their level of formal literacy training. Huettig, Singh & Mishra (2011) showed that unlike high-literate individuals, whose eye gaze was closely time-locked to phonological overlap between a spoken target word and items presented in a visual display, low-literate individuals' eye gaze was not tightly locked to phonological overlap in the speech signal but instead strongly influenced by semantic relationships between items. Our present study tests the hypothesis that this behaviour is an emergent property of an increased ability to extract phonological structure from the speech signal, as in the case of high-literates, with low-literates more reliant on syllabic structure. This hypothesis was tested using an emergent connectionist model, based on the Hub-and-spoke models of semantic processing (Dilkina et al., 2008), that integrates linguistic information extracted from the speech signal with visual and semantic information within a central resource. We demonstrate that contrasts in fixation behaviour similar to those observed between high and low literates emerge when the model is trained on either a speech signal segmented by phoneme (i.e. high-literates) or by syllable (i.e. low-literates).
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Phonological grain size and general processing speed modulates language mediated visual attention – Evidence from a connectionist model. Talk presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2013]. Marseille, France. 2013-09-02 - 2013-09-04.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Putting rhyme in context: Visual and semantic competition eliminates phonological rhyme effects in language-mediated eye gaze. Talk presented at The 18th Conference of the European Society for Cognitive Psychology [ESCOP 2013]. Budapest, Hungary. 2013-08-29 - 2013-09-01.
  • Huettig, F. (2010). Looking, language, and memory. Talk presented at Language, Cognition, and Emotion Workshop. Delhi, India. 2010-12-06 - 2010-12-06.
  • Huettig, F., & Gastel, A. (2010). Language-mediated eye movements and attentional control: Phonological and semantic competition effects are contingent upon scene complexity. Poster presented at the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010], York, UK.

  • Huettig, F., Singh, N., & Mishra, R. (2010). Language-mediated prediction is contingent upon formal literacy. Talk presented at Brain, Speech and Orthography Workshop. Brussels, Belgium. 2010-10-15 - 2010-10-16.

    Abstract

    A wealth of research has demonstrated that prediction is a core feature of human information processing. Much less is known, however, about the nature and the extent of predictive processing abilities. Here we investigated whether high levels of language expertise attained through formal literacy are related to anticipatory language-mediated visual orienting. Indian low and high literates listened to simple spoken sentences containing a target word (e.g., "door") while at the same time looking at a visual display of four objects (a target, i.e. the door, and three distractors). The spoken sentences were constructed to encourage anticipatory eye movements to visual target objects. High literates started to shift their eye gaze to the target object well before target word onset. In the low literacy group this shift of eye gaze occurred more than a second later, well after the onset of the target. Our findings suggest that formal literacy is crucial for the fine-tuning of language-mediated anticipatory mechanisms, abilities which proficient language users can then exploit for other cognitive activities such as language-mediated visual orienting.
  • Huettig, F. (2010). Toddlers’ language-mediated visual search: They need not have the words for it. Talk presented at International Conference on Cognitive Development 2010. Allahabad, India. 2010-12-10 - 2010-12-13.

    Abstract

    Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., “strawberry”), listeners tend to fixate a colour-matched distractor (e.g., a red plane) more than a colour-mismatched distractor (e.g., a yellow plane). We ask whether these shifts in visual attention are mediated by the retrieval of lexically stored colour labels. Do children who do not yet possess verbal labels for the colour attribute that spoken and viewed objects have in common exhibit language-mediated eye movements like those made by older children and adults? That is, do toddlers look at a red plane when hearing “strawberry”? We observed that 24-month-olds lacking colour-term knowledge nonetheless recognised the perceptual-conceptual commonality between named and seen objects. This indicates that language-mediated visual search need not depend on stored labels for concepts.
  • Rommers, J., Huettig, F., & Meyer, A. S. (2010). Task-dependency in the activation of visual representations during language comprehension. Poster presented at The Embodied Mind: Perspectives and Limitations, Nijmegen, The Netherlands.
  • Rommers, J., Huettig, F., & Meyer, A. S. (2010). Task-dependent activation of visual representations during language comprehension. Poster presented at The 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010], York, UK.
  • Brouwer, S., Mitterer, H., & Huettig, F. (2009). Listeners reconstruct reduced forms during spontaneous speech: Evidence from eye movements. Poster presented at 15th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2009), Barcelona, Spain.
  • Brouwer, S., Mitterer, H., & Huettig, F. (2009). Phonological competition during the recognition of spontaneous speech: Effects of linguistic context and spectral cues. Poster presented at 157th Meeting of the Acoustical Society of America, Portland, OR.

    Abstract

    How do listeners recognize reduced forms that occur in spontaneous speech, such as “puter” for “computer”? To address this question, eye-tracking experiments were performed in which participants heard a sentence and saw four printed words on a computer screen. The auditory stimuli contained canonical and reduced forms from a spontaneous speech corpus, presented in different amounts of linguistic context. The four printed words were a “canonical form” competitor (e.g., “companion”, phonologically similar to “computer”), a “reduced form” competitor (e.g., “pupil”, phonologically similar to “puter”), and two unrelated distractors. The results showed, first, that reduction inhibits word recognition overall. Second, listeners looked more often at the “reduced form” competitor than at the “canonical form” competitor when reduced forms were presented in isolation or in a phonetic context. In full context, however, both competitors attracted looks: an early rise of the “reduced form” competitor and a late rise of the “canonical form” competitor. This “late rise” of the “canonical form” competitor was not observed when we replaced the original /p/ from “puter” with a real onset /p/. This indicates that phonetic detail and semantic/syntactic context are necessary for the recognition of reduced forms.
  • Huettig, F., & McQueen, J. M. (2009). AM radio noise changes the dynamics of spoken word recognition. Talk presented at 15th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2009). Barcelona, Spain. 2009-09-09.

    Abstract

    Language processing does not take place in isolation from the sensory environment. Listeners are able to recognise spoken words in many different situations, ranging from carefully articulated and noise-free laboratory speech, through casual conversational speech in a quiet room, to degraded conversational speech in a busy train station. For listeners to be able to recognize speech optimally in each of these listening situations, they must be able to adapt to the constraints of each situation. We investigated this flexibility by comparing the dynamics of the spoken-word recognition process in clear speech and speech disrupted by radio noise. In Experiment 1, Dutch participants listened to clearly articulated spoken Dutch sentences which each included a critical word while their eye movements to four visual objects presented on a computer screen were measured. There were two critical conditions. In the first, the objects included a cohort competitor (e.g., parachute, “parachute”) with the same onset as the critical spoken word (e.g., paraplu, “umbrella”) and three unrelated distractors. In the second condition, a rhyme competitor (e.g., hamer, “hammer”) of the critical word (e.g., kamer, “room”) was present in the display, again with three distractors. To maximize competitor effects, pictures of the critical words themselves were not present in the displays on the experimental trials (e.g., there was no umbrella in the display with the 'paraplu' sentence) and a passive listening task was used (Huettig & McQueen, 2007). Experiment 2 was identical to Experiment 1 except that phonemes in the spoken sentences were replaced with radio-signal noises (as in AM radio listening conditions). In each sentence, two, three, or four phonemes were replaced with noises. The sentential position of these replacements was unpredictable, but the adjustments were always made to onset phonemes. The critical words (and the immediately surrounding words) were not changed.
The question was whether listeners could learn that, under these circumstances, onset information is less reliable. We predicted that participants would look less at the cohort competitors (the initial match to the competitor is less good) and more at the rhyme competitors (the initial mismatch is less bad). We observed a significant experiment-by-competitor-type interaction. In Experiment 1, participants fixated both kinds of competitors more than unrelated distractors, but there were more and earlier looks to cohort competitors than to rhyme competitors (Allopenna et al., 1998). In Experiment 2, participants still fixated cohort competitors more than rhyme competitors, but the early cohort effect was reduced and the rhyme effect was stronger and occurred earlier. These results suggest that AM radio noise changes the dynamics of spoken word recognition. The well-attested finding of stronger reliance on word onset overlap in speech recognition appears to be due in part to the use of clear speech in most experiments. When onset information becomes less reliable, listeners appear to depend on it less. A core feature of the speech-recognition system thus appears to be its flexibility. Listeners are able to adjust the perceptual weight they assign to different parts of incoming spoken language.
  • Huettig, F. (2009). Language-mediated visual search. Invited talk presented at VU Amsterdam. Amsterdam, The Netherlands.
  • Huettig, F. (2009). On the use of distributional models of semantic space to investigate human cognition. Talk presented at Distributional Semantics beyond Concrete Concepts (Workshop at the Annual Meeting of the Cognitive Science Society, CogSci 2009). Amsterdam, The Netherlands. 2009-07-29 - 2009-08-01.
  • Huettig, F. (2009). The role of colour during language-vision interactions. Talk presented at International Conference on Language-Cognition Interface 2009. Allahabad, India. 2009-12-06 - 2009-12-09.
  • Huettig, F., Chen, J., Bowerman, M., & Majid, A. (2008). Linguistic relativity: Evidence from Mandarin speakers’ eye-movements. Talk presented at 14th Annual Conference on the Architectures and Mechanisms for Language Processing [AMLaP 2008]. Cambridge, UK. 2008-09-04 - 2008-09-06.

    Abstract

    If a Mandarin speaker had walked past two rivers and wished to describe how many he had seen, he would have to say “two tiao river”, where tiao designates long, rope-like objects such as rivers, snakes and legs. Tiao is one of several hundred classifiers – a grammatical category in Mandarin. In two eye-tracking studies we presented Mandarin speakers with simple Mandarin sentences through headphones while monitoring their eye-movements to objects presented on a computer monitor. The crucial question is what participants look at while listening to a pre-specified target noun. If classifier categories influence general conceptual processing then on hearing the target noun participants should look at objects that are also members of the same classifier category – even when the classifier is not explicitly present. For example, on hearing scissors, Mandarin speakers should look more at a picture of a chair than at an unrelated object because scissors and chair share the classifier ba. This would be consistent with a Strong Whorfian position, according to which language is a major determinant in shaping conceptual thought (Sapir, 1921; Whorf, 1956). A weaker influence of language-on-thought could be predicted, where language shapes cognitive processing, but only when the language-specific category is actively being processed (Slobin, 1996). According to this account, eye-movements are not necessarily drawn to chair when a participant hears scissors, but they would be on hearing ba scissors. This is because hearing ba activates the linguistic category that both scissors and chair belong to. A third logical possibility is that classifiers are purely formal markers (cf. Greenberg, 1972; Lehman, 1979) that do not influence attentional processing even when they are explicitly present. The data showed that when participants heard a spoken word from the same classifier category as a visually depicted object (e.g. scissors-chair), but the classifier was not explicitly presented in the speech, overt attention to classifier-match objects (e.g. chair) and distractor objects did not differ (Experiment 1). But when the classifier was explicitly presented (e.g. ba, Experiment 2), participants shifted overt attention significantly more to classifier-match objects (e.g. chair) than to distractors. These data are incompatible with the Strong Whorfian hypothesis. Instead the findings support the Weak Whorfian hypothesis that linguistic distinctions force attention to properties of the world but only during active linguistic processing of that distinction (cf. Slobin, 1996).