Falk Huettig

Presentations

  • Araújo, S., Huettig, F., & Meyer, A. S. (2016). What's the nature of the deficit underlying impaired naming? An eye-tracking study with dyslexic readers. Talk presented at IWORDD - International Workshop on Reading and Developmental Dyslexia. Bilbao, Spain. 2016-05-05 - 2016-05-07.

    Abstract

    Serial naming deficits have been identified as core symptoms of developmental dyslexia. A prominent hypothesis is that naming delays are due to inefficient phonological encoding, yet the exact nature of this underlying impairment remains largely underspecified. Here we used recordings of eye movements and word onset latencies to examine at which processing level the dyslexic naming deficit emerges: at an early stage of lexical encoding, or later, at the level of phonetic or motor planning. Twenty-three dyslexic and 25 control adult readers were tested on a serial object naming task for 30 items and an analogous reading task, in which phonological neighborhood density and word frequency were manipulated. Results showed that both word properties influenced early stages of phonological activation (first fixation and first-pass duration) equally in both groups of participants. Moreover, in the control group any difficulty appeared to be resolved early in the reading process, while for dyslexic readers a processing disadvantage for low-frequency words and for words with sparse neighborhoods also emerged in a measure that included late stages of output planning (eye-voice span). Thus, our findings suggest suboptimal phonetic and/or articulatory planning in dyslexia.
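
    A note on the eye-voice span measure used above: it is conventionally the lag between the eyes first reaching an item and the voice beginning to name it. A minimal Python sketch of how such spans could be derived from time-stamped data (the variable names and toy timings are illustrative assumptions, not the study's data):

        # Toy sketch: eye-voice span (EVS) per item in a serial naming task.
        # Assumes time stamps for when the eyes first land on each item and
        # when the voice starts naming it (both hypothetical values).
        first_fixation_onset_ms = [0, 410, 830, 1260]
        voice_onset_ms = [520, 940, 1390, 1820]

        # EVS: how far the voice lags behind the eyes for each item; longer
        # spans implicate later output-planning stages.
        evs = [v - f for f, v in zip(first_fixation_onset_ms, voice_onset_ms)]
        print(evs)  # [520, 530, 560, 560]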
  • Eisner, F., Kumar, U., Mishra, R. K., Nand Tripathi, V., Guleria, A., Prakash Singh, J., & Huettig, F. (2016). Literacy acquisition drives hemispheric lateralization of reading. Talk presented at Architectures and Mechanisms for Language Processing (AMLaP 2016). Bilbao, Spain. 2016-09-01 - 2016-09-03.

    Abstract

    Reading functions beyond early visual processing are known to be lateralized to the left hemisphere, but how left-lateralization arises during literacy acquisition is an open question. Bilateral processing or rightward asymmetries have previously been associated with developmental dyslexia. However, it is unclear at present to what extent this lack of left-lateralization reflects differences in reading ability. In this study, a group of illiterate adults in rural India (N=29) participated in a literacy training program over the course of six months. fMRI measures were obtained before and after training on a number of different visual stimulus categories, including written sentences, false fonts, and object categories such as houses and faces. This training group was matched on demographic and socioeconomic variables to an illiterate no-training group and to low-literate and highly literate control groups, who were also scanned twice but received no training (total N=90). In a cross-sectional analysis before training, reading ability was positively correlated with increased BOLD responses in a left-lateralized network including the dorsal and ventral visual streams for text and false fonts, but not for other types of visual stimuli. A longitudinal analysis of learning effects in the training group showed that beginning readers engage bilateral networks more than proficient readers. Lateralization of BOLD responses was further examined by calculating laterality indices in specific regions. We observed training-related changes in lateralization for processing written stimuli in a number of subregions in the dorsal and ventral visual streams, as well as in the cerebellum. Together with the cross-sectional results, these data suggest a causal relationship between reading ability and the degree of hemispheric asymmetry in processing written materials.
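
    For readers unfamiliar with the laterality indices mentioned above: they are conventionally computed as a normalized left-right difference. A minimal Python sketch (the activation values are illustrative assumptions, not the study's data):

        def laterality_index(left: float, right: float) -> float:
            """Standard laterality index: +1 = fully left-lateralized,
            -1 = fully right-lateralized, 0 = perfectly bilateral."""
            total = left + right
            if total == 0:
                raise ValueError("no suprathreshold activation in either hemisphere")
            return (left - right) / total

        # Illustrative suprathreshold voxel counts in homologous regions.
        print(laterality_index(420.0, 180.0))  # 0.4 -> moderately left-lateralized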
  • Eisner, F., Kumar, U., Mishra, R. K., Nand Tripathi, V., Guleria, A., Prakash Singh, J., & Huettig, F. (2016). Literacy acquisition drives hemispheric lateralization of reading. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Reading functions beyond early visual processing are known to be lateralized to the left hemisphere, but how left-lateralization arises during literacy acquisition is an open question. Bilateral processing or rightward asymmetries have previously been associated with developmental dyslexia. However, it is unclear at present to what extent this lack of left-lateralization reflects differences in reading ability. In this study, a group of illiterate adults in rural India (N=29) participated in a literacy training program over the course of six months. fMRI measures were obtained before and after training on a number of different visual stimulus categories, including written sentences, false fonts, and object categories such as houses and faces. This training group was matched on demographic and socioeconomic variables to an illiterate no-training group and to low-literate and highly literate control groups, who were also scanned twice but received no training (total N=90). In a cross-sectional analysis before training, reading ability was positively correlated with increased BOLD responses in a left-lateralized network including the dorsal and ventral visual streams for text and false fonts, but not for other types of visual stimuli. A longitudinal analysis of learning effects in the training group showed that beginning readers engage bilateral networks more than proficient readers. Lateralization of BOLD responses was further examined by calculating laterality indices in specific regions. We observed training-related changes in lateralization for processing written stimuli in a number of subregions in the dorsal and ventral visual streams, as well as in the cerebellum. Together with the cross-sectional results, these data suggest a causal relationship between reading ability and the degree of hemispheric asymmetry in processing written materials.
  • Eisner, F., Kumar, U., Mishra, R. K., Nand Tripathi, V., Guleria, A., Prakash Singh, J., & Huettig, F. (2016). Literacy acquisition drives hemispheric lateralization of reading. Talk presented at the 31st International Congress of Psychology (ICP2016). Yokohama, Japan. 2016-07-24 - 2016-07-29.

    Abstract

    Reading functions beyond early visual processing are known to be lateralized to the left hemisphere, but how left-lateralization arises during literacy acquisition is an open question. Bilateral processing or rightward asymmetries have previously been associated with developmental dyslexia. However, it is unclear at present to what extent this lack of left-lateralization reflects differences in reading ability. In this study, a group of illiterate adults in rural India (N=29) participated in a literacy training program over the course of six months. fMRI measures were obtained before and after training on a number of different visual stimulus categories, including written sentences, false fonts, and object categories such as houses and faces. This training group was matched on demographic and socioeconomic variables to an illiterate no-training group and to low-literate and highly literate control groups, who were also scanned twice but received no training (total N=90). In a cross-sectional analysis before training, reading ability was positively correlated with increased BOLD responses in a left-lateralized network including the dorsal and ventral visual streams for text and false fonts, but not for other types of visual stimuli. A longitudinal analysis of learning effects in the training group showed that beginning readers engage bilateral networks more than proficient readers. Lateralization of BOLD responses was further examined by calculating laterality indices in specific regions. We observed training-related changes in lateralization for processing written stimuli in a number of subregions in the dorsal and ventral visual streams, as well as in the cerebellum. Together with the cross-sectional results, these data suggest a causal relationship between reading ability and the degree of hemispheric asymmetry in processing written materials.
  • Huettig, F., Kumar, U., Mishra, R., Tripathi, V. N., Guleria, A., Prakash Singh, J., Eisner, F., & Skeide, M. A. (2016). Learning to read alters intrinsic cortico-subcortical cross-talk in the low-level visual system. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    INTRODUCTION: fMRI findings have revealed the important insight that literacy-related learning triggers cognitive adaptation mechanisms manifesting themselves in increased BOLD responses during print processing tasks (Brem et al., 2010; Carreiras et al., 2009; Dehaene et al., 2010). It remains elusive, however, whether the cortical plasticity effects of reading acquisition also lead to an intrinsic functional reorganization of neural circuits. METHODS: Here, we used resting-state fMRI as a measure of domain-specific spontaneous neuronal activity to capture the impact of reading acquisition on the functional connectome (Honey et al., 2007; Lohmann et al., 2010; Raichle et al., 2001). In a controlled longitudinal intervention study, we taught 21 illiterate adults from Northern India for 6 months how to read Hindi script and compared their resting-state fMRI data with those acquired from a sample of 9 illiterates, matched for demographic and socioeconomic variables, who did not undergo such instruction. RESULTS: Initially, we investigated at the whole-brain level whether the experience of becoming literate modifies network nodes of spontaneous hemodynamic activity. To this end, we compared training-related differences in the degree centrality of BOLD signals between the groups (Zuo et al., 2012). A significant group by time interaction (tmax = 4.17, p < 0.005, corrected for cluster size) was found in a cluster extending from the right superior colliculus of the brainstem (+6, -30, -3) to the bilateral pulvinar nuclei of the thalamus (+6, -18, -3; -6, -21, -3). This interaction was characterized by a significant mean degree centrality increase in the trained group (t(1,20) = 8.55, p < 0.001) that did not appear in the untrained group, which remained at its baseline level (t(1,8) = 0.14, p = 0.893). The cluster obtained from the degree centrality analysis was then used as a seed region in a voxel-wise functional connectivity analysis (Biswal et al., 1995). A significant group by time interaction (tmax = 4.45, p < 0.005, corrected for cluster size) emerged in the right occipital cortex (+24, -81, +15; +24, -93, +12; +33, -90, +3). The cortico-subcortical mean functional connectivity became significantly stronger in the group that took part in the reading program (z = 3.77, p < 0.001) but not in the group that remained illiterate (z = 0.77, p = 0.441). Individual slopes of cortico-subcortical connectivity were significantly associated with the improvement in letter knowledge (r = 0.40, p = 0.014) and with the improvement in word reading ability (r = 0.38, p = 0.018). CONCLUSION: Intrinsic hemodynamic activity changes driven by literacy occurred in subcortical low-level relay stations of the visual pathway and their functional connections to the occipital cortex. Accordingly, the visual system of beginning readers appears to go through fundamental modulations at earlier processing stages than suggested by previous event-related fMRI experiments. Our results add a new dimension to current concepts of the brain basis of reading and raise novel questions regarding the neural origin of developmental dyslexia.
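
    For orientation, the two resting-state measures named above can be sketched in a few lines of numpy: degree centrality counts, per voxel, how many other voxels its time course correlates with above some threshold, and seed-based functional connectivity correlates a seed region's mean time course with every voxel. The random data and the r > 0.25 threshold below are illustrative assumptions, not the study's pipeline:

        import numpy as np

        rng = np.random.default_rng(0)
        ts = rng.standard_normal((200, 500))  # 200 timepoints x 500 voxels (toy data)

        # Degree centrality: per voxel, count suprathreshold correlations.
        corr = np.corrcoef(ts.T)              # 500 x 500 voxel-voxel correlations
        np.fill_diagonal(corr, 0.0)
        degree_centrality = (corr > 0.25).sum(axis=1)

        # Seed-based connectivity: correlate a seed's mean time course (voxels
        # 0-9 stand in for the colliculus/pulvinar cluster) with every voxel.
        seed = ts[:, :10].mean(axis=1)
        seed_z = (seed - seed.mean()) / seed.std()
        vox_z = (ts - ts.mean(axis=0)) / ts.std(axis=0)
        seed_connectivity = (vox_z * seed_z[:, None]).mean(axis=0)  # Pearson r per voxel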
  • Huettig, F. (2016). Is prediction necessary to understand language? Talk presented at the RefNet Round Table conference. Aberdeen, Scotland. 2016-01-15 - 2016-01-16.

    Abstract

    Many psycholinguistic experiments suggest that prediction is an important characteristic of language processing. Some recent theoretical accounts in the cognitive sciences (e.g., Clark, 2013; Friston, 2010) and psycholinguistics (e.g., Dell & Chang, 2014) appear to suggest that prediction is even necessary to understand language. I will evaluate this proposal. I will first discuss several arguments that may appear to be in line with the notion that prediction is necessary for language processing, including that prediction provides a unified theoretical principle of the human mind and that it pervades cortical function. I will discuss whether evidence of human abilities to detect statistical regularities is necessarily evidence for predictive processing and evaluate suggestions that prediction is necessary for language learning. I will then present five arguments that question the claim that all language processing is predictive in nature. I will point out that not all language users appear to predict language and that suboptimal input often makes prediction very challenging. Prediction, moreover, is strongly context-dependent and impeded by resource limitations. I will also argue that it may be problematic that most experimental evidence for predictive language processing comes from 'prediction-encouraging' experimental set-ups. Finally, I will discuss possible ways that may lead to a further resolution of this debate. I conclude that languages can be learned and understood in the absence of prediction and that claims that all language processing is predictive in nature are premature.
  • Huettig, F. (2016). The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants. Talk presented at the Psychology Department, University of Brussels. Brussels, Belgium. 2016-10.
  • Huettig, F., Kumar, U., Mishra, R. K., Tripathi, V., Guleria, A., Prakash Singh, J., & Eisner, F. (2016). The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants. Talk presented at the International meeting of the Psychonomic Society. Granada, Spain. 2016-05-05 - 2016-05-08.

    Abstract

    How do human cultural inventions such as reading result in neural re-organization? In this first longitudinal study with young, completely illiterate adult participants, we measured brain responses to speech, text, and other categories of visual stimuli with fMRI before and after a group of illiterate participants in India completed a literacy training program in which they learned to read and write Devanagari script. A literate and an illiterate no-training control group were matched to the training group in terms of socioeconomic background and were recruited from the same societal community in two villages of a rural area near Lucknow, India. This design permitted investigating effects of literacy cross-sectionally across groups before training (N=86) as well as longitudinally (training group N=25). The two analysis approaches yielded converging results: literacy was associated with enhanced, left-lateralized responses to written text along the ventral stream (including lingual gyrus, fusiform gyrus, and parahippocampal gyrus), the dorsal stream (intraparietal sulcus), (pre)motor systems (precentral sulcus, supplementary motor area), and the thalamus (pulvinar). Significantly reduced responses were observed bilaterally in the superior parietal lobe (precuneus) and in the right angular gyrus. These effects corroborate and extend previous findings from cross-sectional studies. However, effects of literacy were specific to written text and (to a lesser extent) to false fonts. We did not find any evidence for effects of literacy on responses in the auditory cortex in our Hindi-speaking participants. This raises questions about the extent to which phonological representations are altered by literacy acquisition.
  • Ostarek, M., Ishag, A., & Huettig, F. (2016). Language comprehension does not require perceptual simulation. Poster presented at the 23rd Annual Meeting of the Cognitive Neuroscience Society (CNS 2016), New York, NY, USA.
  • Ostarek, M., & Huettig, F. (2016). Sensory representations are causally involved in cognition but only when the task requires it. Talk presented at the 3rd Attentive Listener in the Visual World (AttLis) workshop. Potsdam, Germany. 2016-05-10 - 2016-05-11.
  • Ostarek, M., & Huettig, F. (2016). Spoken words can make the invisible visible: Testing the involvement of low-level visual representations in spoken word processing. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. Although some neuroimaging evidence is consistent with such a prediction (Desai et al., 2009; Hwang et al., 2009; Lewis & Poeppel, 2014), these findings do not tell us much about the nature of the representations that were accessed. In the present study, we directly tested whether low-level visual cortex is involved in spoken word processing. Using continuous flash suppression, we show that spoken words activate behaviorally relevant low-level visual representations and pin down the time course of this effect to the first few hundred milliseconds after word onset. We investigated whether participants (N=24) can detect otherwise invisible objects (presented for 400ms) when they are presented with the corresponding spoken word 200ms before the picture appears. We implemented a design in which all cue words appeared equally often in picture-present (50%) and picture-absent trials (50%). In half of the picture-present trials, the spoken word was congruent with the target picture ("bottle" -> picture of a bottle), while in the other half it was incongruent ("bottle" -> picture of a banana). All picture stimuli were evenly distributed over the experimental conditions to rule out low-level differences that can affect detectability regardless of the prime words. Our results showed facilitated detection for congruent vs. incongruent pictures in terms of hit rates (z=-2.33, p=0.02) and d'-scores (t=3.01, p<0.01). A second experiment (N=33) investigated the time course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that the effect arises as soon as 200-400ms after word onset and decays around word offset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, i.e., what we see. More generally, our findings fit best with the notion that spoken words activate modality-specific visual representations that are low-level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens (a signature of episodic memory) but also for generalizing to novel exemplars one has never seen before.
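
    The d'-scores reported above are the standard signal detection sensitivity measure; a minimal Python sketch of how d' is derived from hit and false-alarm rates (the example rates are made up for illustration):

        from scipy.stats import norm

        def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
            """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate).
            Rates of exactly 0 or 1 would need a correction before use."""
            return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

        # Illustrative rates: detection of suppressed pictures after congruent
        # vs. incongruent spoken words, against a common false-alarm rate.
        print(d_prime(0.62, 0.20))  # congruent:   ~1.15
        print(d_prime(0.48, 0.20))  # incongruent: ~0.79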
  • Ostarek, M., & Huettig, F. (2016). Spoken words can make the invisible visible: Testing the involvement of low-level visual representations in spoken word processing. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. Although some neuroimaging evidence is consistent with such a prediction (Desai et al., 2009; Hwang et al., 2009; Lewis & Poeppel, 2014), these findings do not tell us much about the nature of the representations that were accessed. In the present study, we directly tested whether low-level visual cortex is involved in spoken word processing. Using continuous flash suppression, we show that spoken words activate behaviorally relevant low-level visual representations and pin down the time course of this effect to the first few hundred milliseconds after word onset. We investigated whether participants (N=24) can detect otherwise invisible objects (presented for 400ms) when they are presented with the corresponding spoken word 200ms before the picture appears. We implemented a design in which all cue words appeared equally often in picture-present (50%) and picture-absent trials (50%). In half of the picture-present trials, the spoken word was congruent with the target picture ("bottle" -> picture of a bottle), while in the other half it was incongruent ("bottle" -> picture of a banana). All picture stimuli were evenly distributed over the experimental conditions to rule out low-level differences that can affect detectability regardless of the prime words. Our results showed facilitated detection for congruent vs. incongruent pictures in terms of hit rates (z=-2.33, p=0.02) and d'-scores (t=3.01, p<0.01). A second experiment (N=33) investigated the time course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that the effect arises as soon as 200-400ms after word onset and decays around word offset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, i.e., what we see. More generally, our findings fit best with the notion that spoken words activate modality-specific visual representations that are low-level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens (a signature of episodic memory) but also for generalizing to novel exemplars one has never seen before.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2016). Testing alternative architectures for multimodal integration during spoken language processing in the visual world. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Current cognitive models of spoken word recognition and comprehension are underspecified with respect to when and how multimodal information interacts. We compare two computational models, both of which permit the integration of concurrent information within linguistic and non-linguistic processing streams, but whose architectures differ critically in the level at which multimodal information interacts. We compare the predictions of the Multimodal Integration Model (MIM) of language processing (Smith, Monaghan & Huettig, 2014), which implements full interactivity between modalities, to a model in which interaction between modalities is restricted to lexical representations, which we implement as an extended multimodal version of the TRACE model of spoken word recognition (McClelland & Elman, 1986). Our results demonstrate that previous visual world data sets involving phonological onset similarity are compatible with both models, whereas our novel experimental data on rhyme similarity are able to distinguish between the competing architectures. The fully interactive MIM system correctly predicts a greater influence of visual and semantic information relative to phonological rhyme information on gaze behaviour, while by contrast a system that restricts multimodal interaction to the lexical level overestimates the influence of phonological rhyme, thereby providing an upper limit for when information interacts in multimodal tasks.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2016). The multimodal nature of spoken word processing in the visual world: Testing the predictions of alternative models of multimodal integration. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Current cognitive models of spoken word recognition and comprehension are underspecified with respect to when and how multimodal information interacts. We compare two computational models, both of which permit the integration of concurrent information within linguistic and non-linguistic processing streams, but whose architectures differ critically in the level at which multimodal information interacts. We compare the predictions of the Multimodal Integration Model (MIM) of language processing (Smith, Monaghan & Huettig, 2014), which implements full interactivity between modalities, to a model in which interaction between modalities is restricted to lexical representations, which we implement as an extended multimodal version of the TRACE model of spoken word recognition (McClelland & Elman, 1986). Our results demonstrate that previous visual world data sets involving phonological onset similarity are compatible with both models, whereas our novel experimental data on rhyme similarity are able to distinguish between the competing architectures. The fully interactive MIM system correctly predicts a greater influence of visual and semantic information relative to phonological rhyme information on gaze behaviour, while by contrast a system that restricts multimodal interaction to the lexical level overestimates the influence of phonological rhyme, thereby providing an upper limit for when information interacts in multimodal tasks.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2016). The multimodal nature of spoken word processing in the visual world: Testing the predictions of alternative models of multimodal integration. Talk presented at the 15th Neural Computation and Psychology Workshop: Contemporary Neural Network Models (NCPW15). Philadelphia, PA, USA. 2016-08-08 - 2016-08-09.
  • Speed, L., Chen, J., Huettig, F., & Majid, A. (2016). Do classifier categories affect or reflect object concepts? Talk presented at the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016). Philadelphia, PA, USA. 2016-08-10 - 2016-08-13.

    Abstract

    We conceptualize objects based on sensory and motor information gleaned from real-world experience. But to what extent is such conceptual information also structured according to higher-level linguistic features? Here we investigate whether classifiers, a grammatical category, shape the conceptual representations of objects. In three experiments, native Mandarin speakers (speakers of a classifier language) and native Dutch speakers (speakers of a language without classifiers) judged the similarity of a target object (presented as a word or picture) with four objects (presented as words or pictures). One object shared a classifier with the target; the other objects did not and served as distractors. Across all experiments, participants judged the target object as more similar to the object with the shared classifier than to the distractor objects. This effect was seen in both Dutch and Mandarin speakers, and there was no difference between the two languages. Thus, even speakers of a non-classifier language are sensitive to the object similarities underlying classifier systems, and using a classifier system does not exaggerate these similarities. This suggests that classifier systems simply reflect, rather than affect, conceptual structure.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2014). Mechanisms underlying predictive language processing. Talk presented at the 56. Tagung experimentell arbeitender Psychologen [TeaP, Conference on Experimental Psychology]. Giessen, Germany. 2014-03-31 - 2014-04-02.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2014). Prediction using production or production engaging prediction? Poster presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014), Edinburgh, UK.

    Abstract

    Prominent theories of predictive language processing assume that language production processes are used to anticipate upcoming linguistic input during comprehension (Dell & Chang, 2014; Pickering & Garrod, 2013). Here, we explore the converse case: does a task set including production in addition to comprehension encourage prediction, compared to a task only including comprehension? To test this hypothesis, we conducted a cross-modal naming experiment (Experiment 1) including an object naming task and a self-paced reading experiment (Experiment 2) that did not include overt production. We used the same predictable (N = 40) and non-predictable (N = 40) sentences in both experiments. The sentences consisted of a fixed agent, a transitive verb and a predictable or non-predictable target word (The man drinks a beer vs. The man buys a beer). Most of the empirical work on prediction has used sentences in which the target words were highly predictable (often with a mean cloze probability > .8), and thus it is hardly surprising that participants engaged in predictive language processing very easily. In the current sentences, the mean cloze probability in the predictable sentences was .39 (ranging from .06 to .8; zero in the non-predictable sentences). If comprehenders are more likely to engage in predictive processing when the task set involves production, we should observe more pronounced effects of prediction in Experiment 1 than in Experiment 2. If production does not enhance prediction, we should observe similar effects of prediction in both experiments. In Experiment 1, participants (N = 54) listened to recordings of the sentences, which ended right before the spoken target word. Coinciding with the end of the playback, a picture of the target word was shown, which the participants were asked to name as fast as possible. Analyses of their naming latencies revealed a statistically significant naming advantage of 106 ms on predictable over non-predictable trials. Moreover, we found that the objects' naming advantage was predicted by the target words' cloze probability in the sentences (r = .411, p = .016). In Experiment 2, the same sentences were used in a self-paced reading experiment. To allow for testing of potential spill-over effects, we added a neutral prepositional phrase (buys a beer from the bar keeper/drinks a beer from the shop) to each sentence. Participants (N = 54) read the sentences word by word, advancing by pushing the space bar. On 30% of the trials, comprehension questions were used to keep participants focused on comprehending the sentences. Analyses of participants' target and post-target reading times revealed numerical advantages of 6 ms and 20 ms, respectively, in the predictable as compared to the non-predictable condition. However, in both cases this difference was not statistically reliable (t = .757, t = 1.43), and the significant positive correlation between an item's naming advantage and its cloze probability seen in Experiment 1 was absent (r = .037, p = .822). Importantly, the analysis of participants' responses to the comprehension questions showed that they understood the sentences (mean accuracy = 93%). To conclude, although both experiments used the same sentences, we observed effects of prediction only when the task included production. In Experiment 2, no evidence for anticipation was found although participants clearly understood the sentences and the method has previously been shown to be sensitive to prediction effects (Van Berkum et al., 2005). Our results fit with a recent study by Gollan et al. (2011), who found only a small processing advantage for predictable over non-predictable sentences in reading (using highly predictable sentences with a cloze probability > .87) but a strong prediction effect when participants read the same sentences and carried out an additional object naming task (see also Griffin & Bock, 1998). Taken together, these studies suggest that the comprehenders' task set exerts a powerful influence on the likelihood and magnitude of predictive language processing. When the task set involves language production, as is often the case in natural conversation, comprehenders might engage in prediction to a stronger degree than in pure comprehension tasks. Being able to predict words another person is about to say might optimize the comprehension process and enable smooth turn-taking.
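
    Cloze probability, the item-level predictor at the heart of this design, is simply the proportion of norming participants who complete a sentence frame with a given word. A minimal Python sketch (the norming responses below are invented for illustration):

        from collections import Counter

        def cloze_probability(completions: list[str], target: str) -> float:
            """Proportion of norming participants producing the target word."""
            counts = Counter(w.lower() for w in completions)
            return counts[target.lower()] / len(completions)

        # Invented norming data for the frame "The man drinks a ...".
        responses = ["beer"] * 8 + ["coffee"] * 7 + ["lot"] * 5
        print(cloze_probability(responses, "beer"))  # 0.4, within the study's .06-.8 range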
  • Hintz, F., Meyer, A. S., & Huettig, F. (2014). The influence of verb-specific featural restrictions, word associations, and production-based mechanisms on language-mediated anticipatory eye movements. Talk presented at the 27th annual CUNY conference on human sentence processing. Ohio State University, Columbus/Ohio (US). 2014-03-13 - 2014-03-15.
  • Huettig, F., & Guerra, E. (2014). Context-dependent mapping of linguistic and color representations challenges strong forms of embodiment. Talk presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014). Edinburgh, UK. 2014-09-03 - 2014-09-06.

    Abstract

    A central claim of embodied theories of cognition is that sensory representations are routinely activated and influence language processing even in the absence of relevant sensory input (cf. Pulvermüller, 2005; Wassenburg & Zwaan, 2010). We tested the influence of color representations during language processing in three visual world eye-tracking experiments. The method is particularly well suited to investigate this issue because the availability of relevant visual input can be manipulated. We made use of the phenomenon that when participants hear a word that refers to a visual object or printed word, they quickly direct their eye gaze to objects or printed words which are similar (e.g., semantically or visually) to the heard word. We used a look-and-listen task which has previously been shown to be sensitive to such relationships between spoken words and visual items. In Experiment 1, on experimental trials, participants listened to sentences containing a critical target word associated with a prototypical color (e.g., '...spinach...') as they inspected a visual display with four words printed in black font. One of the four printed words was associated with the same prototypical color (e.g., green) as the spoken target word (e.g., FROG). On experimental trials, the spoken target word did not have a printed-word counterpart (SPINACH was not present in the display). On filler trials (70% of trials) the target was present in the display and attracted significantly more overt attention than the unrelated distractors. On experimental trials, color competitors were not looked at more than the distractors. In Experiment 2, the printed words were replaced with line drawings of the objects. In order to direct the attentional focus of our participants toward color features, we used a within-participants counterbalanced design and alternated color and greyscale trials randomly throughout the experiment. Thus, on one trial our participants heard a word such as 'spinach' and saw a frog (colored green) in the visual display; on the next trial they saw a banana (in greyscale) on hearing 'canary' (bananas and canaries are typically yellow), and so on. The presence (or absence) of color was thus a salient property of the experiment. Participants looked more at color competitors than at unrelated distractors on hearing the target word in the color trials but not in the greyscale trials, i.e., on hearing 'spinach' they looked at the green frog but not at the greyscale frog. Experiment 3 was identical to Experiment 2, except that the visual display was removed at sentence onset, after a longer preview. This experiment examined whether the continued presence of color in the immediate visual environment was necessary for the observation of color-mediated eye movements. Eye movements directed towards the now-blank screen were recorded as the sentence unfolded (cf. Spivey & Geng, 2001). On filler trials, participants looked significantly more at the locations where the targets, rather than the distractors, had previously been presented as the target words acoustically unfolded. On experimental trials, the locations where the color competitors had previously been presented did not attract increased attention (in neither the color nor the greyscale trials). These data demonstrate that language-mediated eye movements are only influenced by color relations between spoken words and visually displayed items if color is present in the immediate visual environment. We conclude that color representations are unlikely to be routinely activated in language processing. Our findings provide strong constraints for embodied theories of cognition which assume that sensory representations influence language processing even in the absence of relevant sensory input. These results fit best with the notion that the main role of sensory representations in language processing is a different one, namely to contextualize language in the immediate environment, connecting language to the here and now.
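
    The dependent measure in such visual world experiments is typically the proportion of trials on which gaze falls on each object type in successive time samples. A minimal Python sketch of the competitor-versus-distractor comparison (the coded gaze data are randomly generated stand-ins, not the experiment's recordings):

        import numpy as np

        # Toy gaze matrix: 40 trials x 100 time samples, coding the fixated
        # object (1 = color competitor, 2 = unrelated distractor, 0 = elsewhere).
        rng = np.random.default_rng(1)
        gaze = rng.choice([0, 1, 2], size=(40, 100), p=[0.5, 0.3, 0.2])

        # Proportion of trials fixating each object type at each time sample.
        p_competitor = (gaze == 1).mean(axis=0)
        p_distractor = (gaze == 2).mean(axis=0)

        # A reliable competitor advantage in a post-word-onset window is what
        # counts as evidence for color-mediated looks.
        window = slice(30, 70)  # illustrative analysis window
        print(p_competitor[window].mean() - p_distractor[window].mean())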
  • Huettig, F. (2015). Does prediction in language comprehension involve language production? Talk presented at the Comprehension=Production? workshop. Nijmegen, the Netherlands. 2015-03-26 - 2015-03-28.

    Abstract

    The notion that predicting upcoming linguistic information in language comprehension makes use of the production system has recently received much attention (e.g., Chang et al., 2006; Dell & Chang, 2014; Federmeier, 2007; Pickering & Garrod, 2007, 2013; Van Berkum et al., 2005). So far there has been little experimental evidence for a relation between prediction and production. I will discuss the results of several recent eye-tracking experiments with toddlers (Mani & Huettig, 2012) and adults (Rommers et al., submitted; Hintz et al., in prep.) which provide some support for the view that production abilities are linked to language-mediated anticipatory eye movements. These data, however, also indicate that production-based prediction is situation-dependent and only one of many mechanisms supporting prediction. Taken together, these results suggest that multiple-mechanism accounts are required to provide a complete picture of anticipatory language processing.
  • Huettig, F. (2014). How embodied is language processing? Talk presented at the 2nd Attentive Listener in the Visual World workshop. Hyderabad, India. 2014-11-03 - 2014-11-05.
  • Huettig, F. (2014). How literacy acquisition affects the illiterate mind. Talk presented at the Low Educated Second Language and Literacy Acquisition (LESLLA) symposium. Nijmegen, Netherlands. 2014-08-28 - 2014-08-30.
  • Huettig, F. (2014). Literacy influences on predictive language processing and visual search. Talk presented at the Priming across Modalities: The Influence of Orthography on Sign and Spoken Language Processing workshop. Haifa, Israel. 2014-04.
  • Huettig, F. (2014). The context-dependent influence of colour representations during language-vision interactions constrains theories of conceptual processing. Talk presented at the Color in Concepts workshop. Düsseldorf, Germany. 2014-06-02 - 2014-06-03.
  • Rommers, J., & Huettig, F. (2014). Limits to cross-modal semantic and object shape priming in sentence context. Poster presented at the Society for the Neurobiology of Language [SNL 2014], Amsterdam, the Netherlands.
  • Rommers, J., & Huettig, F. (2014). Limits to cross-modal semantic and object shape priming in sentence context. Poster presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014), Edinburgh, UK.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). A comprehensive model of spoken word recognition must be multimodal: Evidence from studies of language mediated visual attention. Talk presented at the 36th Annual Conference of the Cognitive Science Society [CogSci 2014]. Quebec, Canada. 2014-07-23 - 2014-07-26.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Examining strains and symptoms of the ‘Literacy Virus’: The effects of orthographic transparency on phonological processing in a connectionist model of reading. Talk presented at the 36th Annual Conference of the Cognitive Science Society [CogSci 2014]. Quebec, Canada. 2014-07-23 - 2014-07-26.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Examining the effects of orthographic transparency on phonological and semantic processing within a connectionist implementation of the triangle model of reading. Talk presented at the 14th Neural Computation and Psychology Workshop [NCPW 14]. Lancaster, U.K. 2014-08-21 - 2014-08-23.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2014). Strains and symptoms of the ‘literacy virus’: Modelling the effects of orthographic transparency on phonological processing. Poster presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014), Edinburgh, UK.
  • Huettig, F. (2013). Anticipatory eye movements and predictive language processing. Talk presented at the ZiF research group on "Competition and Priority Control in Mind and Brain". Bielefeld, Germany. 2013-07.
  • Huettig, F., Mani, N., Mishra, R. K., & Brouwer, S. (2013). Literacy as a proxy for experience: Reading ability predicts anticipatory language processing in children, low literate adults, and adults with dyslexia. Poster presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2013), Marseille, France.
  • Huettig, F., Mishra, R. K., Kumar, U., Singh, J. P., Guleria, A., & Tripathi, V. (2013). Phonemic and syllabic awareness of adult literates and illiterates in an Indian alphasyllabic language. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2013]. Vienna, Austria. 2013-03-24 - 2013-03-27.
  • Huettig, F., Mani, N., Mishra, R. K., & Brouwer, S. (2013). Reading ability predicts anticipatory language processing in children, low literate adults, and adults with dyslexia. Talk presented at the 11th International Symposium of Psycholinguistics. Tenerife, Spain. 2013-03-20 - 2013-03-23.
  • Huettig, F., Mani, N., Mishra, R. K., & Brouwer, S. (2013). Reading ability predicts anticipatory language processing in children, low literate adults, and adults with dyslexia. Talk presented at the 54th Annual Meeting of the Psychonomic Society. Toronto, Canada. 2013-11-14 - 2013-11-17.
  • Janse, E., Huettig, F., & Jesse, A. (2013). Working memory modulates the immediate use of context for recognizing words in sentences. Talk presented at the 5th Workshop on Speech in Noise: Intelligibility and Quality. Vitoria, Spain. 2013-01-10 - 2013-01-11.
  • Lai, V. T., & Huettig, F. (2013). When anticipation meets emotion: EEG evidence for distinct processing mechanisms. Poster presented at the Annual Meeting of the Society for the Neurobiology of Language, San Diego, US.
  • Lai, V. T., & Huettig, F. (2013). When anticipation meets emotion: EEG evidence for distinct processing mechanisms. Poster presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2013), Marseille, France.
  • Mani, N., & Huettig, F. (2013). Reading ability predicts anticipatory language processing in 8 year olds. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2013]. Vienna, Austria. 2013-03-24 - 2013-03-27.
  • Rommers, J., Meyer, A. S., Piai, V., & Huettig, F. (2013). Constraining the involvement of language production in comprehension: A comparison of object naming and object viewing in sentence context. Talk presented at the 19th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2013]. Marseille, France. 2013-09-02 - 2013-09-04.
  • Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2013). Anticipating references to objects during sentence comprehension. Talk presented at the Experimental Psychology Society meeting (EPS). Bangor, UK. 2013-07-03 - 2013-07-05.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Both phonological grain-size and general processing speed determine literacy related differences in language mediated eye gaze: Evidence from a connectionist model. Poster presented at The 18th Conference of the European Society for Cognitive Psychology [ESCOP 2013], Budapest, Hungary.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Modelling the effect of literacy on multimodal interactions during spoken language processing in the visual world. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2013]. Vienna, Austria. 2013-03-24 - 2013-03-27.

    Abstract

    Recent empirical evidence suggests that language-mediated eye gaze around the visual world varies across individuals and is partly determined by their level of formal literacy training. Huettig, Singh & Mishra (2011) showed that unlike high-literate individuals, whose eye gaze was closely time-locked to phonological overlap between a spoken target word and items presented in a visual display, low-literate individuals' eye gaze was not tightly locked to phonological overlap in the speech signal but instead strongly influenced by semantic relationships between items. Our present study tests the hypothesis that this behaviour is an emergent property of an increased ability to extract phonological structure from the speech signal, as in the case of high-literates, with low-literates more reliant on syllabic structure. This hypothesis was tested using an emergent connectionist model, based on the hub-and-spoke models of semantic processing (Dilkina et al., 2008), that integrates linguistic information extracted from the speech signal with visual and semantic information within a central resource. We demonstrate that contrasts in fixation behaviour similar to those observed between high and low literates emerge when the model is trained on a speech signal segmented either by phoneme (i.e., high-literates) or by syllable (i.e., low-literates).
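
    A minimal Python sketch of the hub-and-spoke idea described above: separate modality "spokes" (phonological, visual, semantic) project to one shared hub, whose activation is read out as fixation preferences over the display. The layer sizes and random weights are arbitrary stand-ins for exposition, not the trained model:

        import numpy as np

        rng = np.random.default_rng(2)

        def weights(n_in: int, n_out: int) -> np.ndarray:
            return rng.standard_normal((n_in, n_out)) * 0.1

        # Spoke-to-hub weights for the three input modalities (sizes arbitrary).
        w_phon, w_vis, w_sem = weights(20, 50), weights(30, 50), weights(40, 50)
        w_out = weights(50, 4)  # hub -> four display locations

        def fixation_preference(phon, vis, sem):
            """All modalities meet in one shared hub; a softmax over display
            locations stands in for the model's fixation behaviour."""
            hub = np.tanh(phon @ w_phon + vis @ w_vis + sem @ w_sem)
            scores = hub @ w_out
            e = np.exp(scores - scores.max())
            return e / e.sum()

        print(fixation_preference(rng.standard_normal(20),
                                  rng.standard_normal(30),
                                  rng.standard_normal(40)))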
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Phonological grain size and general processing speed modulates language mediated visual attention – Evidence from a connectionist model. Talk presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2013]. Marseille, France. 2013-09-02 - 2013-09-04.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Putting rhyme in context: Visual and semantic competition eliminates phonological rhyme effects in language-mediated eye gaze. Talk presented at The 18th Conference of the European Society for Cognitive Psychology [ESCOP 2013]. Budapest, Hungary. 2013-08-29 - 2013-09-01.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Semantic and visual competition eliminates the influence of rhyme overlap in spoken language processing. Poster presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2013], Marseille, France.
  • Huettig, F., Chen, J., Bowerman, M., & Majid, A. (2008). Linguistic relativity: Evidence from Mandarin speakers’ eye-movements. Talk presented at 14th Annual Conference on the Architectures and Mechanisms for Language Processing [AMLaP 2008]. Cambridge, UK. 2008-09-04 - 2008-09-06.

    Abstract

    If a Mandarin speaker had walked past two rivers and wished to describe how many he had seen, he would have to say “two tiao river”, where tiao designates long, rope-like objects such as rivers, snakes and legs. Tiao is one of several hundred classifiers – a grammatical category in Mandarin. In two eye-tracking studies we presented Mandarin speakers with simple Mandarin sentences through headphones while monitoring their eye-movements to objects presented on a computer monitor. The crucial question is what participants look at while listening to a pre-specified target noun. If classifier categories influence general conceptual processing then on hearing the target noun participants should look at objects that are also members of the same classifier category – even when the classifier is not explicitly present. For example, on hearing scissors, Mandarin speakers should look more at a picture of a chair than at an unrelated object because scissors and chair share the classifier ba. This would be consistent with a Strong Whorfian position, according to which language is a major determinant in shaping conceptual thought (Sapir, 1921; Whorf, 1956). A weaker influence of language-on-thought could be predicted, where language shapes cognitive processing, but only when the language-specific category is actively being processed (Slobin, 1996). According to this account, eye-movements are not necessarily drawn to chair when a participant hears scissors, but they would be on hearing ba scissors. This is because hearing ba activates the linguistic category that both scissors and chair belong to. A third logical possibility is that classifiers are purely formal markers (cf. Greenberg, 1972; Lehman, 1979) that do not influence attentional processing even when they are explicitly present. The data showed that when participants heard a spoken word from the same classifier category as a visually depicted object (e.g. scissors-chair), but the classifier was not explicitly presented in the speech, overt attention to classifier-match objects (e.g. chair) and distractor objects did not differ (Experiment 1). But when the classifier was explicitly presented (e.g. ba, Experiment 2), participants shifted overt attention significantly more to classifier-match objects (e.g. chair) than to distractors. These data are incompatible with the Strong Whorfian hypothesis. Instead the findings support the Weak Whorfian hypothesis that linguistic distinctions force attention to properties of the world but only during active linguistic processing of that distinction (cf. Slobin, 1996).
