Falk Huettig

Presentations

  • Eisner, F., Kumar, U., Mishra, R. K., Nand Tripathi, V., Guleria, A., Prakash Singh, J., & Huettig, F. (2016). Literacy acquisition drives hemispheric lateralization of reading. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Reading functions beyond early visual processing are known to be lateralized to the left hemisphere, but how left-lateralization arises during literacy acquisition is an open question. Bilateral processing or rightward asymmetries have previously been associated with developmental dyslexia. However, it is unclear at present to what extent this lack of left-lateralization reflects differences in reading ability. In this study, a group of illiterate adults in rural India (N=29) participated in a literacy training program over the course of six months. fMRI measures were obtained before and after training on a number of different visual stimulus categories, including written sentences, false fonts, and object categories such as houses and faces. This training group was matched on demographic and socioeconomic variables to an illiterate no-training group and to low- and highly literate control groups, who were also scanned twice but received no training (total N=90). In a cross-sectional analysis before training, reading ability was positively correlated with BOLD responses in a left-lateralized network including the dorsal and ventral visual streams for text and false fonts, but not for other types of visual stimuli. A longitudinal analysis of learning effects in the training group showed that beginning readers engage bilateral networks more than proficient readers. Lateralization of BOLD responses was further examined by calculating laterality indices in specific regions. We observed training-related changes in lateralization for processing written stimuli in a number of subregions in the dorsal and ventral visual streams, as well as in the cerebellum. Together with the cross-sectional results, these data suggest a causal relationship between reading ability and the degree of hemispheric asymmetry in processing written materials. [A minimal illustrative sketch of a laterality-index computation appears after this list.]
  • Huettig, F., Kumar, U., Mishra, R., Tripathi, V. N., Guleria, A., Prakash Singh, J., Eisner, F., & Skeide, M. A. (2016). Learning to read alters intrinsic cortico-subcortical cross-talk in the low-level visual system. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    INTRODUCTION: fMRI findings have revealed the important insight that literacy-related learning triggers cognitive adaptation mechanisms manifesting themselves in increased BOLD responses during print-processing tasks (Brem et al., 2010; Carreiras et al., 2009; Dehaene et al., 2010). It remains elusive, however, whether the cortical plasticity effects of reading acquisition also lead to an intrinsic functional reorganization of neural circuits.

    METHODS: Here, we used resting-state fMRI as a measure of domain-specific spontaneous neuronal activity to capture the impact of reading acquisition on the functional connectome (Honey et al., 2007; Lohmann et al., 2010; Raichle et al., 2001). In a controlled longitudinal intervention study, we taught 21 illiterate adults from Northern India how to read Hindi script over the course of 6 months and compared their resting-state fMRI data with those acquired from a sample of 9 illiterates, matched for demographic and socioeconomic variables, who did not undergo such instruction.

    RESULTS: We first investigated at the whole-brain level whether the experience of becoming literate modifies network nodes of spontaneous hemodynamic activity. To this end, we compared training-related differences in the degree centrality of BOLD signals between the groups (Zuo et al., 2012). A significant group by time interaction (tmax = 4.17, p < 0.005, corrected for cluster size) was found in a cluster extending from the right superior colliculus of the brainstem (+6, -30, -3) to the bilateral pulvinar nuclei of the thalamus (+6, -18, -3; -6, -21, -3). This interaction was characterized by a significant mean degree-centrality increase in the trained group (t(1,20) = 8.55, p < 0.001) that did not appear in the untrained group, which remained at its baseline level (t(1,8) = 0.14, p = 0.893). The cluster obtained from the degree centrality analysis was then used as a seed region in a voxel-wise functional connectivity analysis (Biswal et al., 1995). A significant group by time interaction (tmax = 4.45, p < 0.005, corrected for cluster size) emerged in the right occipital cortex (+24, -81, +15; +24, -93, +12; +33, -90, +3). Cortico-subcortical mean functional connectivity became significantly stronger in the group that took part in the reading program (z = 3.77, p < 0.001) but not in the group that remained illiterate (z = 0.77, p = 0.441). Individual slopes of cortico-subcortical connectivity were significantly associated with the improvement in letter knowledge (r = 0.40, p = 0.014) and with the improvement in word reading ability (r = 0.38, p = 0.018).

    CONCLUSION: Intrinsic hemodynamic activity changes driven by literacy occurred in subcortical low-level relay stations of the visual pathway and in their functional connections to the occipital cortex. Accordingly, the visual system of beginning readers appears to undergo fundamental modulations at earlier processing stages than suggested by previous event-related fMRI experiments. Our results add a new dimension to current concepts of the brain basis of reading and raise novel questions regarding the neural origin of developmental dyslexia. [A minimal illustrative sketch of degree-centrality and seed-based connectivity computations appears after this list.]
  • Ostarek, M., Ishag, A., & Huettig, F. (2016). Language comprehension does not require perceptual simulation. Poster presented at the 23rd Annual Meeting of the Cognitive Neuroscience Society (CNS 2016), New York, NY, USA.
  • Ostarek, M., & Huettig, F. (2016). Spoken words can make the invisible visible: Testing the involvement of low-level visual representations in spoken word processing. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation, which contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. Although some neuroimaging evidence is consistent with such a prediction (Desai et al., 2009; Hwang et al., 2009; Lewis & Poeppel, 2014), these findings do not tell us much about the nature of the representations that were accessed. In the present study, we directly tested whether low-level visual cortex is involved in spoken word processing. Using continuous flash suppression, we show that spoken words activate behaviorally relevant low-level visual representations and pin down the time-course of this effect to the first few hundred milliseconds after word onset. We investigated whether participants (N=24) can detect otherwise invisible objects (presented for 400ms) when they are presented with the corresponding spoken word 200ms before the picture appears. We implemented a design in which all cue words appeared equally often in picture-present (50%) and picture-absent (50%) trials. In half of the picture-present trials, the spoken word was congruent with the target picture ("bottle" -> picture of a bottle), while in the other half it was incongruent ("bottle" -> picture of a banana). All picture stimuli were evenly distributed over the experimental conditions to rule out low-level differences that can affect detectability regardless of the prime words. Our results showed facilitated detection for congruent vs. incongruent pictures in terms of hit rates (z=-2.33, p=0.02) and d'-scores (t=3.01, p<0.01). A second experiment (N=33) investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that the effect arises as early as 200-400ms after word onset and decays around word offset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, i.e., what we see. More generally, our findings fit best with the notion that spoken words activate modality-specific visual representations that are low-level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens (a signature of episodic memory) but also for generalizing to novel exemplars one has never seen before. [A minimal sketch of the d'-score computation appears after this list.]
  • Ostarek, M., & Huettig, F. (2016). Spoken words can make the invisible visible: Testing the involvement of low-level visual representations in spoken word processing. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation, which contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. Although some neuroimaging evidence is consistent with such a prediction (Desai et al., 2009; Hwang et al., 2009; Lewis & Poeppel, 2014), these findings do not tell us much about the nature of the representations that were accessed. In the present study, we directly tested whether low-level visual cortex is involved in spoken word processing. Using continuous flash suppression, we show that spoken words activate behaviorally relevant low-level visual representations and pin down the time-course of this effect to the first few hundred milliseconds after word onset. We investigated whether participants (N=24) can detect otherwise invisible objects (presented for 400ms) when they are presented with the corresponding spoken word 200ms before the picture appears. We implemented a design in which all cue words appeared equally often in picture-present (50%) and picture-absent (50%) trials. In half of the picture-present trials, the spoken word was congruent with the target picture ("bottle" -> picture of a bottle), while in the other half it was incongruent ("bottle" -> picture of a banana). All picture stimuli were evenly distributed over the experimental conditions to rule out low-level differences that can affect detectability regardless of the prime words. Our results showed facilitated detection for congruent vs. incongruent pictures in terms of hit rates (z=-2.33, p=0.02) and d'-scores (t=3.01, p<0.01). A second experiment (N=33) investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that the effect arises as early as 200-400ms after word onset and decays around word offset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, i.e., what we see. More generally, our findings fit best with the notion that spoken words activate modality-specific visual representations that are low-level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens (a signature of episodic memory) but also for generalizing to novel exemplars one has never seen before.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2016). Testing alternative architectures for multimodal integration during spoken language processing in the visual world. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Current cognitive models of spoken word recognition and comprehension are underspecified with respect to when and how multimodal information interacts. We compare two computational models, both of which permit the integration of concurrent information within linguistic and non-linguistic processing streams, but whose architectures differ critically in the level at which multimodal information interacts. We compare the predictions of the Multimodal Integration Model (MIM) of language processing (Smith, Monaghan & Huettig, 2014), which implements full interactivity between modalities, to a model in which interaction between modalities is restricted to lexical representations, which we implement as an extended multimodal version of the TRACE model of spoken word recognition (McClelland & Elman, 1986). Our results demonstrate that previous visual world data sets involving phonological onset similarity are compatible with both models, whereas our novel experimental data on rhyme similarity are able to distinguish between the competing architectures. The fully interactive MIM system correctly predicts a greater influence of visual and semantic information relative to phonological rhyme information on gaze behaviour, whereas a system that restricts multimodal interaction to the lexical level overestimates the influence of phonological rhyme, thereby providing an upper limit for when information interacts in multimodal tasks. [A toy sketch illustrating this architectural contrast appears after this list.]
  • Smith, A. C., Monaghan, P., & Huettig, F. (2016). The multimodal nature of spoken word processing in the visual world: Testing the predictions of alternative models of multimodal integration. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Current cognitive models of spoken word recognition and comprehension are underspecified with respect to when and how multimodal information interacts. We compare two computational models, both of which permit the integration of concurrent information within linguistic and non-linguistic processing streams, but whose architectures differ critically in the level at which multimodal information interacts. We compare the predictions of the Multimodal Integration Model (MIM) of language processing (Smith, Monaghan & Huettig, 2014), which implements full interactivity between modalities, to a model in which interaction between modalities is restricted to lexical representations, which we implement as an extended multimodal version of the TRACE model of spoken word recognition (McClelland & Elman, 1986). Our results demonstrate that previous visual world data sets involving phonological onset similarity are compatible with both models, whereas our novel experimental data on rhyme similarity are able to distinguish between the competing architectures. The fully interactive MIM system correctly predicts a greater influence of visual and semantic information relative to phonological rhyme information on gaze behaviour, whereas a system that restricts multimodal interaction to the lexical level overestimates the influence of phonological rhyme, thereby providing an upper limit for when information interacts in multimodal tasks.
  • Huettig, F., & Gastel, A. (2010). Language-mediated eye movements and attentional control: Phonological and semantic competition effects are contingent upon scene complexity. Poster presented at the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010], York, UK.
  • Rommers, J., Huettig, F., & Meyer, A. S. (2010). Task-dependency in the activation of visual representations during language comprehension. Poster presented at The Embodied Mind: Perspectives and Limitations, Nijmegen, The Netherlands.
  • Rommers, J., Huettig, F., & Meyer, A. S. (2010). Task-dependent activation of visual representations during language comprehension. Poster presented at The 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010], York, UK.
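For the laterality analysis described in the Eisner et al. abstract above, hemispheric asymmetry is commonly quantified with a laterality index of the form LI = (L - R) / (L + R), computed over suprathreshold voxel counts (or summed activation) in homologous left- and right-hemisphere regions of interest. The abstract does not state which variant was used, so the Python sketch below is only a minimal illustration of the standard formula; the statistical threshold and the simulated inputs are assumptions.

    # Illustrative sketch, not the authors' code: laterality index
    # LI = (L - R) / (L + R) over suprathreshold voxel counts in
    # homologous left/right ROIs. Threshold and inputs are assumptions.
    import numpy as np

    def laterality_index(left_activation, right_activation, threshold=2.3):
        """Return LI in [-1, 1]; positive values indicate left-lateralization."""
        n_left = np.sum(np.asarray(left_activation) > threshold)    # suprathreshold voxels, left ROI
        n_right = np.sum(np.asarray(right_activation) > threshold)  # suprathreshold voxels, right ROI
        if n_left + n_right == 0:
            return 0.0
        return (n_left - n_right) / (n_left + n_right)

    # Example with simulated z-statistics for a left and a right ventral-stream ROI
    rng = np.random.default_rng(0)
    li = laterality_index(rng.normal(2.0, 1.0, 500), rng.normal(1.0, 1.0, 500))
    print(f"LI = {li:.2f}")  # values > 0 indicate leftward asymmetry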
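The Huettig et al. resting-state abstract above relies on voxel-wise degree centrality (Zuo et al., 2012) and seed-based functional connectivity (Biswal et al., 1995). The sketch below shows how these two measures are typically computed from preprocessed resting-state time series; the correlation threshold, the Fisher z-transform, and the simulated data are assumptions rather than details of the study's pipeline.

    # Illustrative sketch, not the authors' pipeline: degree centrality and
    # seed-based functional connectivity on resting-state time series.
    import numpy as np

    def degree_centrality(ts, r_thresh=0.25):
        """ts: array of shape (n_timepoints, n_voxels). Returns, per voxel, the
        number of other voxels whose time-series correlation exceeds r_thresh."""
        r = np.corrcoef(ts, rowvar=False)   # voxel-by-voxel correlation matrix
        np.fill_diagonal(r, 0.0)            # ignore self-correlations
        return np.sum(r > r_thresh, axis=0)

    def seed_connectivity(ts, seed_voxels):
        """Fisher z-transformed correlation of the mean seed time series with
        every voxel's time series."""
        seed = ts[:, seed_voxels].mean(axis=1)
        r = np.array([np.corrcoef(seed, ts[:, v])[0, 1] for v in range(ts.shape[1])])
        return np.arctanh(np.clip(r, -0.999999, 0.999999))

    # Example with simulated data: 200 volumes, 1000 voxels
    rng = np.random.default_rng(1)
    ts = rng.normal(size=(200, 1000))
    dc = degree_centrality(ts)
    fc_map = seed_connectivity(ts, seed_voxels=np.argsort(dc)[-20:])  # top-20 hub voxels as seed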
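The two Ostarek & Huettig abstracts report detection performance as hit rates and d'-scores. The sketch below shows the standard signal-detection computation of d' from hit and false-alarm rates; the log-linear correction for extreme proportions and the example trial counts are assumptions, as the abstracts do not specify them.

    # Illustrative sketch, not the authors' analysis: d' = z(hit rate) - z(false-alarm rate),
    # with a log-linear correction so that rates of 0 or 1 do not yield infinite z-scores.
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Hypothetical counts for one participant, congruent vs. incongruent prime words
    print(d_prime(hits=38, misses=10, false_alarms=12, correct_rejections=36))  # congruent
    print(d_prime(hits=30, misses=18, false_alarms=12, correct_rejections=36))  # incongruent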
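The two Smith, Monaghan & Huettig abstracts contrast a fully interactive multimodal architecture (MIM) with one that restricts multimodal interaction to the lexical level (a multimodal extension of TRACE). The toy sketch below is not an implementation of either model; it only illustrates, with made-up numbers, why routing visual/semantic support into pre-lexical as well as lexical units tends to increase the relative influence of visual/semantic fit and reduce the relative influence of phonological rhyme overlap on predicted looking behaviour.

    # Toy illustration only: neither MIM nor TRACE, just the architectural contrast
    # described above, with made-up weights and generic item labels.
    import numpy as np

    items = ["target", "rhyme_competitor", "visual_semantic_competitor", "distractor"]
    phon_fit = np.array([1.0, 0.6, 0.0, 0.0])    # overlap with the unfolding spoken word
    vissem_fit = np.array([1.0, 0.0, 0.7, 0.0])  # visual/semantic fit to the displayed objects

    def predicted_looks(fully_interactive):
        # pre-lexical units receive visual/semantic support only in the fully interactive case
        prelexical = phon_fit + (0.5 * vissem_fit if fully_interactive else 0.0)
        lexical = prelexical + 0.5 * vissem_fit  # both architectures integrate at the lexical level
        return lexical / lexical.sum()           # crude stand-in for fixation proportions

    for label, flag in [("fully interactive (MIM-like)", True),
                        ("lexical-level interaction only (TRACE-like)", False)]:
        print(label, dict(zip(items, predicted_looks(flag).round(2))))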
