Falk Huettig

Presentations

  • Favier, S., Meyer, A. S., & Huettig, F. (2019). Does literacy predict individual differences in syntactic processing? Talk presented at the International Workshop on Literacy and Writing Systems: Cultural, Neuropsychological and Psycholinguistic Perspectives. Haifa, Israel. 2019-02-18 - 2019-02-20.
  • Favier, S., Wright, A., Meyer, A. S., & Huettig, F. (2019). Proficiency modulates between- but not within-language structural priming. Poster presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019), Tenerife, Spain.
  • Hintz, F., Ostarek, M., De Nijs, M., Joosen, D., & Huettig, F. (2019). N’Sync or A’Sync? The role of timing when acquiring spoken and written word forms in a tonal language. Poster presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019), Tenerife, Spain.

    Abstract

    Theories of reading propose that the quality of word form representations affects reading comprehension. One claim is that synchronous retrieval of orthographic and phonological representations leads to better performance than asynchronous retrieval. Based on this account, one may hypothesize that synchronous rather than asynchronous presentation of orthographic and phonological forms should be beneficial when establishing the mapping between the two, as it should lead to tighter couplings. We tested this hypothesis in two multi-session experiments in which participants studied isolated words of Chinese, a tonal language unknown to them. During study, written (Pinyin transcription) and spoken word forms were presented simultaneously or asynchronously (audio-first, written-first). In both experiments, we observed an advantage for asynchronous over synchronous presentation at test, with audio-first presentation being most beneficial. These results suggest that the timing of written and spoken word forms has profound effects on the ease of learning a new tonal language.
  • Huettig, F. (2019). Six challenges for embodiment research [keynote]. Talk presented at the 12th annual Conference on Embodied and Situated Language Processing and the sixth AttLis (ESLP/AttLis 2019). Berlin, Germany. 2019-08-28 - 2019-08-30.
  • Ostarek, M., Alday, P. M., Gawel, O., Wolfgruber, J., Knudsen, B., Mantegna, F., & Huettig, F. (2019). Is neural entrainment a basic mechanism for structure building? Poster presented at the Eleventh Annual Meeting of the Society for the Neurobiology of Language (SNL 2019), Helsinki, Finland.
  • Ostarek, M., & Huettig, F. (2019). Towards a unified theory of semantic cognition. Talk presented at the 21st Meeting of the European Society for Cognitive Psychology (ESCoP 2019). Tenerife, Spain. 2019-09-25 - 2019-09-28.
  • Araújo, S., Huettig, F., & Meyer, A. S. (2016). What's the nature of the deficit underlying impaired naming? An eye-tracking study with dyslexic readers. Talk presented at IWORDD - International Workshop on Reading and Developmental Dyslexia. Bilbao, Spain. 2016-05-05 - 2016-05-07.

    Abstract

    Serial naming deficits have been identified as core symptoms of developmental dyslexia. A prominent hypothesis is that naming delays are due to inefficient phonological encoding, yet the exact nature of this underlying impairment remains largely underspecified. Here we used recordings of eye movements and word onset latencies to examine at which processing level the dyslexic naming deficit emerges: at an early stage of lexical encoding, or later, at the level of phonetic or motor planning. Twenty-three dyslexic and 25 control adult readers were tested on a serial object naming task with 30 items and on an analogous reading task, in which phonological neighborhood density and word frequency were manipulated. Results showed that both word properties influenced early stages of phonological activation (first fixation and first-pass duration) equally in both groups of participants. Moreover, in the control group any difficulty appeared to be resolved early in the reading process, while for dyslexic readers a processing disadvantage for low-frequency words and for words with sparse neighborhoods also emerged in a measure that included late stages of output planning (eye-voice span). Thus, our findings suggest suboptimal phonetic and/or articulatory planning in dyslexia.
  • Eisner, F., Kumar, U., Mishra, R. K., Nand Tripathi, V., Guleria, A., Prakash Singh, J., & Huettig, F. (2016). Literacy acquisition drives hemispheric lateralization of reading. Talk presented at Architectures and Mechanisms for Language Processing (AMLaP 2016). Bilbao, Spain. 2016-09-01 - 2016-09-03.

    Abstract

    Reading functions beyond early visual processing are known to be lateralized to the left hemisphere, but how left-lateralization arises during literacy acquisition is an open question. Bilateral processing or rightward asymmetries have previously been associated with developmental dyslexia. However, it is unclear at present to what extent this lack of left-lateralization reflects differences in reading ability. In this study, a group of illiterate adults in rural India (N=29) participated in a literacy training program over the course of six months. fMRI measures were obtained before and after training on a number of different visual stimulus categories, including written sentences, false fonts, and object categories such as houses and faces. This training group was matched on demographic and socioeconomic variables to an illiterate no-training group and to low- and highly-literate control groups, who were also scanned twice but received no training (total N=90). In a cross-sectional analysis before training, reading ability was positively correlated with increased BOLD responses in a left-lateralized network including the dorsal and ventral visual streams for text and false fonts, but not for other types of visual stimuli. A longitudinal analysis of learning effects in the training group showed that beginning readers engage bilateral networks more than proficient readers. Lateralization of BOLD responses was further examined by calculating laterality indices in specific regions. We observed training-related changes in lateralization for processing written stimuli in a number of subregions in the dorsal and ventral visual streams, as well as in the cerebellum. Together with the cross-sectional results, these data suggest a causal relationship between reading ability and the degree of hemispheric asymmetry in processing written materials.
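    The laterality indices mentioned above are conventionally computed as a normalized difference between homologous left- and right-hemisphere activation. Below is a minimal, illustrative Python sketch of that calculation, assuming the standard formulation LI = (L - R) / (L + R); the function name, ROI values, and example numbers are hypothetical and not taken from the study's analysis pipeline.

    import numpy as np

    def laterality_index(left_activation: np.ndarray, right_activation: np.ndarray) -> float:
        """Standard laterality index: +1 = fully left-lateralized, -1 = fully right-lateralized.
        Inputs are, e.g., summed suprathreshold BOLD contrast values (or voxel counts)
        within homologous left/right regions of interest."""
        L = float(np.sum(left_activation))
        R = float(np.sum(right_activation))
        if L + R == 0:
            return 0.0  # no suprathreshold signal in either hemisphere
        return (L - R) / (L + R)

    # Hypothetical per-subject values before vs. after literacy training
    li_pre = laterality_index(np.array([3.1, 2.8, 4.0]), np.array([2.9, 2.7, 3.5]))
    li_post = laterality_index(np.array([4.5, 3.9, 5.1]), np.array([2.2, 2.0, 2.8]))
    print(f"LI pre-training: {li_pre:.2f}, LI post-training: {li_post:.2f}")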
  • Eisner, F., Kumar, U., Mishra, R. K., Nand Tripathi, V., Guleria, A., Prakash Singh, J., & Huettig, F. (2016). Literacy acquisition drives hemispheric lateralization of reading. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    Reading functions beyond early visual processing are known to be lateralized to the left hemisphere, but how left-lateralization arises during literacy acquisition is an open question. Bilateral processing or rightward asymmetries have previously been associated with developmental dyslexia. However, it is unclear at present to what extent this lack of left-lateralization reflects differences in reading ability. In this study, a group of illiterate adults in rural India (N=29) participated in a literacy training program over the course of six months. fMRI measures were obtained before and after training on a number of different visual stimulus categories, including written sentences, false fonts, and object categories such as houses and faces. This training group was matched on demographic and socioeconomic variables to an illiterate no-training group and to low- and highly-literate control groups, who were also scanned twice but received no training (total N=90). In a cross-sectional analysis before training, reading ability was positively correlated with increased BOLD responses in a left-lateralized network including the dorsal and ventral visual streams for text and false fonts, but not for other types of visual stimuli. A longitudinal analysis of learning effects in the training group showed that beginning readers engage bilateral networks more than proficient readers. Lateralization of BOLD responses was further examined by calculating laterality indices in specific regions. We observed training-related changes in lateralization for processing written stimuli in a number of subregions in the dorsal and ventral visual streams, as well as in the cerebellum. Together with the cross-sectional results, these data suggest a causal relationship between reading ability and the degree of hemispheric asymmetry in processing written materials.
  • Eisner, F., Kumar, U., Mishra, R. K., Nand Tripathi, V., Guleria, A., Prakash Singh, J., & Huettig, F. (2016). Literacy acquisition drives hemispheric lateralization of reading. Talk presented at the 31st International Congress of Psychology (ICP2016). Yokohama, Japan. 2016-07-24 - 2016-07-29.

    Abstract

    Reading functions beyond early visual processing are known to be lateralized to the left hemisphere, but how left-lateralization arises during literacy acquisition is an open question. Bilateral processing or rightward asymmetries have previously been associated with developmental dyslexia. However, it is unclear at present to what extent this lack of left-lateralization reflects differences in reading ability. In this study, a group of illiterate adults in rural India (N=29) participated in a literacy training program over the course of six months. fMRI measures were obtained before and after training on a number of different visual stimulus categories, including written sentences, false fonts, and object categories such as houses and faces. This training group was matched on demographic and socioeconomic variables to an illiterate no-training group and to low- and highly-literate control groups, who were also scanned twice but received no training (total N=90). In a cross-sectional analysis before training, reading ability was positively correlated with increased BOLD responses in a left-lateralized network including the dorsal and ventral visual streams for text and false fonts, but not for other types of visual stimuli. A longitudinal analysis of learning effects in the training group showed that beginning readers engage bilateral networks more than proficient readers. Lateralization of BOLD responses was further examined by calculating laterality indices in specific regions. We observed training-related changes in lateralization for processing written stimuli in a number of subregions in the dorsal and ventral visual streams, as well as in the cerebellum. Together with the cross-sectional results, these data suggest a causal relationship between reading ability and the degree of hemispheric asymmetry in processing written materials.
  • Huettig, F., Kumar, U., Mishra, R., Tripathi, V. N., Guleria, A., Prakash Singh, J., Eisner, F., & Skeide, M. A. (2016). Learning to read alters intrinsic cortico-subcortical cross-talk in the low-level visual system. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    INTRODUCTION: fMRI findings have revealed the important insight that literacy-related learning triggers cognitive adaptation mechanisms manifesting themselves in increased BOLD responses during print processing tasks (Brem et al., 2010; Carreiras et al., 2009; Dehaene et al., 2010). It remains elusive, however, whether the cortical plasticity effects of reading acquisition also lead to an intrinsic functional reorganization of neural circuits. METHODS: Here, we used resting-state fMRI as a measure of domain-specific spontaneous neuronal activity to capture the impact of reading acquisition on the functional connectome (Honey et al., 2007; Lohmann et al., 2010; Raichle et al., 2001). In a controlled longitudinal intervention study, we taught 21 illiterate adults from Northern India for 6 months how to read Hindi script and compared their resting-state fMRI data with those acquired from a sample of 9 illiterates, matched for demographic and socioeconomic variables, who did not undergo such instruction. RESULTS: Initially, we investigated at the whole-brain level whether the experience of becoming literate modifies network nodes of spontaneous hemodynamic activity. To this end, we compared training-related differences in the degree centrality of BOLD signals between the groups (Zuo et al., 2012). A significant group by time interaction (tmax = 4.17, p < 0.005, corrected for cluster size) was found in a cluster extending from the right superior colliculus of the brainstem (+6, -30, -3) to the bilateral pulvinar nuclei of the thalamus (+6, -18, -3; -6, -21, -3). This interaction was characterized by a significant mean degree centrality increase in the trained group (t(1,20) = 8.55, p < 0.001) that did not appear in the untrained group, which remained at its baseline level (t(1,8) = 0.14, p = 0.893). The cluster obtained from the degree centrality analysis was then used as a seed region in a voxel-wise functional connectivity analysis (Biswal et al., 1995). A significant group by time interaction (tmax = 4.45, p < 0.005, corrected for cluster size) emerged in the right occipital cortex (+24, -81, +15; +24, -93, +12; +33, -90, +3). Cortico-subcortical mean functional connectivity became significantly stronger in the group that took part in the reading program (z = 3.77, p < 0.001) but not in the group that remained illiterate (z = 0.77, p = 0.441). Individual slopes of cortico-subcortical connectivity were significantly associated with the improvement in letter knowledge (r = 0.40, p = 0.014) and with the improvement in word reading ability (r = 0.38, p = 0.018). CONCLUSION: Intrinsic hemodynamic activity changes driven by literacy occurred in subcortical low-level relay stations of the visual pathway and in their functional connections to the occipital cortex. Accordingly, the visual system of beginning readers appears to go through fundamental modulations at earlier processing stages than suggested by previous event-related fMRI experiments. Our results add a new dimension to current concepts of the brain basis of reading and raise novel questions regarding the neural origin of developmental dyslexia.
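    For readers unfamiliar with the two connectivity measures used above, the sketch below illustrates the general logic of a binary degree centrality analysis and a seed-based functional connectivity analysis on simulated resting-state time series. It is a simplified illustration in Python, not the study's pipeline (which involved standard fMRI preprocessing and cluster-size correction); the voxel counts, correlation threshold, and seed definition are arbitrary assumptions.

    import numpy as np

    def degree_centrality(ts: np.ndarray, r_thresh: float = 0.25) -> np.ndarray:
        """Binary degree centrality per voxel.
        ts: (n_voxels, n_timepoints) array of preprocessed BOLD time courses.
        Returns, for each voxel, the number of other voxels it correlates with above r_thresh."""
        z = (ts - ts.mean(axis=1, keepdims=True)) / ts.std(axis=1, keepdims=True)
        corr = z @ z.T / ts.shape[1]   # voxel-by-voxel Pearson correlation matrix
        np.fill_diagonal(corr, 0.0)    # ignore self-connections
        return (corr > r_thresh).sum(axis=1)

    def seed_connectivity(seed_ts: np.ndarray, ts: np.ndarray) -> np.ndarray:
        """Voxel-wise Pearson correlation with a seed time course
        (e.g., the mean signal of a cluster used as seed region)."""
        seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
        z = (ts - ts.mean(axis=1, keepdims=True)) / ts.std(axis=1, keepdims=True)
        return z @ seed / ts.shape[1]

    # Hypothetical usage on simulated data: 500 voxels x 200 volumes
    rng = np.random.default_rng(0)
    ts = rng.standard_normal((500, 200))
    dc = degree_centrality(ts)                          # degree centrality map
    fc = seed_connectivity(ts[:10].mean(axis=0), ts)    # connectivity with a toy 10-voxel seed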
  • Huettig, F. (2016). Is prediction necessary to understand language? Talk presented at the RefNet Round Table conference. Aberdeen, Scotland. 2016-01-15 - 2016-01-16.

    Abstract

    Many psycholinguistic experiments suggest that prediction is an important characteristic of language processing. Some recent theoretical accounts in the cognitive sciences (e.g., Clark, 2013; Friston, 2010) and psycholinguistics (e.g., Dell & Chang, 2014) appear to suggest that prediction is even necessary to understand language. I will evaluate this proposal. I will first discuss several arguments that may appear to be in line with the notion that prediction is necessary for language processing. These arguments include that prediction provides a unified theoretical principle of the human mind and that it pervades cortical function. I will then discuss whether evidence of human abilities to detect statistical regularities is necessarily evidence for predictive processing and evaluate suggestions that prediction is necessary for language learning. Five arguments are then presented that question the claim that all language processing is predictive in nature. I will point out that not all language users appear to predict language and that suboptimal input often makes prediction very challenging. Prediction, moreover, is strongly context-dependent and impeded by resource limitations. I will also argue that it may be problematic that most experimental evidence for predictive language processing comes from 'prediction-encouraging' experimental set-ups. Finally, I will discuss possible ways that may lead to a further resolution of this debate. I conclude that languages can be learned and understood in the absence of prediction. Claims that all language processing is predictive in nature are premature.
  • Huettig, F. (2016). The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants. Talk presented at the Psychology Department, University of Brussels. Brussels, Belgium. 2016-10.
  • Huettig, F., Kumar, U., Mishra, R. K., Tripathi, V., Guleria, A., Prakash Singh, J., & Eisner, F. (2016). The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants. Talk presented at the International meeting of the Psychonomic Society. Granada, Spain. 2016-05-05 - 2016-05-08.

    Abstract

    How do human cultural inventions such as reading result in neural re-organization? In this first longitudinal study with young completely illiterate adult participants, we measured brain responses to speech, text, and other categories of visual stimuli with fMRI before and after a group of illiterate participants in India completed a literacy training program in which they learned to read and write Devanagari script. A literate and an illiterate no-training control group were matched to the training group in terms of socioeconomic background and were recruited from the same societal community in two villages of a rural area near Lucknow, India. This design permitted investigating effects of literacy cross-sectionally across groups before training (N=86) as well as longitudinally (training group N=25). The two analysis approaches yielded converging results: Literacy was associated with enhanced, left-lateralized responses to written text along the ventral stream (including lingual gyrus, fusiform gyrus, and parahippocampal gyrus), dorsal stream (intraparietal sulcus), and (pre-) motor systems (pre-central sulcus, supplementary motor area) and thalamus (pulvinar). Significantly reduced responses were observed bilaterally in the superior parietal lobe (precuneus) and in the right angular gyrus. These effects corroborate and extend previous findings from cross-sectional studies. However, effects of literacy were specific to written text and (to a lesser extent) to false fonts. We did not find any evidence for effects of literacy on responses in the auditory cortex in our Hindi-speaking participants. This raises questions about the extent to which phonological representations are altered by literacy acquisition.
  • Ostarek, M., Ishag, A., & Huettig, F. (2016). Language comprehension does not require perceptual simulation. Poster presented at the 23rd Annual Meeting of the Cognitive Neuroscience Society (CNS 2016), New York, NY, USA.
  • Ostarek, M., & Huettig, F. (2016). Sensory representations are causally involved in cognition but only when the task requires it. Talk presented at the 3rd Attentive Listener in the Visual World (AttLis) workshop. Potsdam, Germany. 2016-05-10 - 2016-05-11.
  • Ostarek, M., & Huettig, F. (2016). Spoken words can make the invisible visible: Testing the involvement of low-level visual representations in spoken word processing. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. Although some neuroimaging evidence is consistent with such a prediction (Desai et al., 2009; Hwang et al., 2009; Lewis & Poeppel, 2014), these findings do not tell us much about the nature of the representations that were accessed. In the present study, we directly tested whether low-level visual cortex is involved in spoken word processing. Using continuous flash suppression, we show that spoken words activate behaviorally relevant low-level visual representations and pin down the time-course of this effect to the first few hundred milliseconds after word onset. We investigated whether participants (N=24) can detect otherwise invisible objects (presented for 400ms) when they are presented with the corresponding spoken word 200ms before the picture appears. We implemented a design in which all cue words appeared equally often in picture-present (50%) and picture-absent trials (50%). In half of the picture-present trials, the spoken word was congruent with the target picture ("bottle" -> picture of a bottle), while in the other half it was incongruent ("bottle" -> picture of a banana). All picture stimuli were evenly distributed over the experimental conditions to rule out low-level differences that can affect detectability regardless of the prime words. Our results showed facilitated detection for congruent vs. incongruent pictures in terms of hit rates (z=-2.33, p=0.02) and d'-scores (t=3.01, p<0.01). A second experiment (N=33) investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that it arises as soon as 200-400ms after word onset and decays at around word offset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, i.e., what we see. More generally, our findings fit best with the notion that spoken words activate modality-specific visual representations that are low-level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens (a signature of episodic memory) but also for generalizing to novel exemplars one has never seen before.
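    The d'-scores reported above are standard signal-detection sensitivity estimates derived from hit and false-alarm rates. The sketch below shows one common way to compute d' in Python, assuming a log-linear correction for extreme proportions; the correction choice and the example trial counts are illustrative assumptions, not values from the experiment.

    from statistics import NormalDist

    def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
        """Sensitivity d' = z(hit rate) - z(false-alarm rate).
        Adds 0.5 to each cell (log-linear correction) so that perfect rates
        do not produce infinite z-scores."""
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)

    # Hypothetical counts for one participant, congruent vs. incongruent trials
    print(d_prime(hits=34, misses=6, false_alarms=8, correct_rejections=72))   # congruent
    print(d_prime(hits=27, misses=13, false_alarms=8, correct_rejections=72))  # incongruent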
  • Ostarek, M., & Huettig, F. (2016). Spoken words can make the invisible visible: Testing the involvement of low-level visual representations in spoken word processing. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.

    Abstract

    The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. Although some neuroimaging evidence is consistent with such a prediction (Desai et al., 2009; Hwang et al., 2009; Lewis & Poeppel, 2014), these findings do not tell us much about the nature of the representations that were accessed. In the present study, we directly tested whether low-level visual cortex is involved in spoken word processing. Using continuous flash suppression, we show that spoken words activate behaviorally relevant low-level visual representations and pin down the time-course of this effect to the first few hundred milliseconds after word onset. We investigated whether participants (N=24) can detect otherwise invisible objects (presented for 400ms) when they are presented with the corresponding spoken word 200ms before the picture appears. We implemented a design in which all cue words appeared equally often in picture-present (50%) and picture-absent trials (50%). In half of the picture-present trials, the spoken word was congruent with the target picture ("bottle" -> picture of a bottle), while in the other half it was incongruent ("bottle" -> picture of a banana). All picture stimuli were evenly distributed over the experimental conditions to rule out low-level differences that can affect detectability regardless of the prime words. Our results showed facilitated detection for congruent vs. incongruent pictures in terms of hit rates (z=-2.33, p=0.02) and d'-scores (t=3.01, p<0.01). A second experiment (N=33) investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that it arises as soon as 200-400ms after word onset and decays at around word offset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, i.e., what we see. More generally, our findings fit best with the notion that spoken words activate modality-specific visual representations that are low-level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens (a signature of episodic memory) but also for generalizing to novel exemplars one has never seen before.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2016). Testing alternative architectures for multimodal integration during spoken language processing in the visual world. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Current cognitive models of spoken word recognition and comprehension are underspecified with respect to when and how multimodal information interacts. We compare two computational models, both of which permit the integration of concurrent information within linguistic and non-linguistic processing streams; however, their architectures differ critically in the level at which multimodal information interacts. We compare the predictions of the Multimodal Integration Model (MIM) of language processing (Smith, Monaghan & Huettig, 2014), which implements full interactivity between modalities, to a model in which interaction between modalities is restricted to lexical representations, which we represent by an extended multimodal version of the TRACE model of spoken word recognition (McClelland & Elman, 1986). Our results demonstrate that previous visual world data sets involving phonological onset similarity are compatible with both models, whereas our novel experimental data on rhyme similarity is able to distinguish between competing architectures. The fully interactive MIM system correctly predicts a greater influence of visual and semantic information relative to phonological rhyme information on gaze behaviour, while by contrast a system that restricts multimodal interaction to the lexical level overestimates the influence of phonological rhyme, thereby providing an upper limit for when information interacts in multimodal tasks.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2016). The multimodal nature of spoken word processing in the visual world: Testing the predictions of alternative models of multimodal integration. Poster presented at Architectures and Mechanisms for Language Processing (AMLaP 2016), Bilbao, Spain.

    Abstract

    Current cognitive models of spoken word recognition and comprehension are underspecified with respect to when and how multimodal information interacts. We compare two computational models, both of which permit the integration of concurrent information within linguistic and non-linguistic processing streams; however, their architectures differ critically in the level at which multimodal information interacts. We compare the predictions of the Multimodal Integration Model (MIM) of language processing (Smith, Monaghan & Huettig, 2014), which implements full interactivity between modalities, to a model in which interaction between modalities is restricted to lexical representations, which we represent by an extended multimodal version of the TRACE model of spoken word recognition (McClelland & Elman, 1986). Our results demonstrate that previous visual world data sets involving phonological onset similarity are compatible with both models, whereas our novel experimental data on rhyme similarity is able to distinguish between competing architectures. The fully interactive MIM system correctly predicts a greater influence of visual and semantic information relative to phonological rhyme information on gaze behaviour, while by contrast a system that restricts multimodal interaction to the lexical level overestimates the influence of phonological rhyme, thereby providing an upper limit for when information interacts in multimodal tasks.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2016). The multimodal nature of spoken word processing in the visual world: Testing the predictions of alternative models of multimodal integration. Talk presented at the 15th Neural Computation and Psychology Workshop: Contemporary Neural Network Models (NCPW15). Philadelphia, PA, USA. 2016-08-08 - 2016-08-09.
  • Speed, L., Chen, J., Huettig, F., & Majid, A. (2016). Do classifier categories affect or reflect object concepts? Talk presented at the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016). Philadelphia, PA, USA. 2016-08-10 - 2016-08-13.

    Abstract

    We conceptualize objects based on sensory and motor information gleaned from real-world experience. But to what extent is such conceptual information structured according to higher-level linguistic features too? Here we investigate whether classifiers, a grammatical category, shape the conceptual representations of objects. In three experiments, native Mandarin speakers (speakers of a classifier language) and native Dutch speakers (speakers of a language without classifiers) judged the similarity of a target object (presented as a word or picture) with four objects (presented as words or pictures). One object shared a classifier with the target; the other objects did not and served as distractors. Across all experiments, participants judged the target object as more similar to the object with the shared classifier than to the distractor objects. This effect was seen in both Dutch and Mandarin speakers, and there was no difference between the two languages. Thus, even speakers of a non-classifier language are sensitive to object similarities underlying classifier systems, and using a classifier system does not exaggerate these similarities. This suggests that classifier systems simply reflect, rather than affect, conceptual structure.
  • De Groot, F., Huettig, F., & Olivers, C. N. L. (2012). Attentional capture by working memory content: When do words guide attention? Poster presented at the 3rd Symposium on “Visual Search and Selective Attention” (VSSA III), Munich, Germany.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2012). Looking at nothing facilitates memory retrieval. Poster presented at Donders Discussions 2012, Nijmegen (NL).

    Abstract

    When processing visual objects, we integrate visual, linguistic and spatial information to form an episodic trace. Re-activating one aspect of the episodic trace of an object re-activates the entire bundle, making all integrated information available. Using the blank screen paradigm [1], researchers observed that upon processing spoken linguistic input, participants tended to make eye movements on a blank screen, fixating locations that had previously been occupied by objects mentioned in the linguistic utterance or by related objects. Ferreira and colleagues [2] suggested that 'looking at nothing' facilitates memory retrieval. However, this claim lacks convincing empirical support. In Experiment 1, Dutch participants looked at four-object displays. Three objects were related to a spoken target word. Given the target word 'beker' (beaker), the display featured a phonological competitor (a bear), a shape competitor (a bobbin), a semantic competitor (a fork), and an unrelated distractor (an umbrella). Participants were asked to name the objects as fast as possible. Subsequently, the objects disappeared. Participants fixated the center of the screen and listened to the target word. They then had to carry out a semantic judgment task (indicating the position of the object that had been semantically related to the target) or a visual shape similarity judgment (indicating the position of the object similar in shape to the target). In both conditions, we observed that participants re-fixated the empty target location before responding. The set-up of Experiment 2 was identical except that we asked participants to keep fixating the center of the screen while listening to the spoken word and responding. Performance accuracy was significantly lower in Experiment 2 than in Experiment 1. The results indicate that memory retrieval for objects is impaired when participants are not allowed to look at relevant, though empty, locations. [1] Altmann, G. (2004). Language-mediated eye movements in the absence of a visual world: the 'blank screen paradigm'. Cognition, 93(2), B79-B87. [2] Ferreira, F., Apel, J., & Henderson, J. M. (2008). Taking a new look at looking at nothing. Trends in Cognitive Sciences, 12(11), 405-410.
  • Hintz, F., & Huettig, F. (2012). Phonological word-object mapping is contingent upon the nature of the visual environment. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2012), Riva del Garda, Italy.

    Abstract

    Four eye-tracking experiments investigated the impact of the nature of the visual environment on the likelihood of word-object mapping taking place at a phonological level of representation during language-mediated visual search. Dutch participants heard single spoken target words while looking at four objects embedded in displays of different complexity and were asked to indicate the presence or absence of the target object. During filler trials the target objects were present, but during experimental trials they were absent and the display contained various competitor objects. For example, given the target word 'beaker', the display contained a phonological competitor (a beaver, bever), a shape competitor (a bobbin, klos), a semantic competitor (a fork, vork), and an unrelated distractor (an umbrella, paraplu). When objects were embedded in semi-realistic scenes including four human-like characters (Experiments 1, 3, and 4a), there were no biases in looks to phonological competitors, even when the objects' contours were highlighted (Experiment 3) and an object naming task was administered right before the eye-tracking experiment (Experiment 4a). In all three experiments, however, we observed evidence for inhibition in looks to phonological competitors, which suggests that the phonological forms of the objects had been retrieved. When objects were presented in simple four-object displays (Experiments 2 and 4b), there were clear attentional biases to phonological competitors, replicating earlier research (Huettig & McQueen, 2007). These findings suggest that phonological word-object mapping is contingent upon the nature of the visual environment and add to a growing body of evidence that the nature of our visual surroundings induces particular modes of processing during language-mediated visual search. Reference: Huettig, F., & McQueen, J. M. (2007). The tug of war between phonological, semantic and shape information in language-mediated visual search. Journal of Memory and Language, 57(4), 460-482. doi: 10.1016/j.jml.2007.02.001
  • Hintz, F., & Huettig, F. (2012). Phonological word-object mapping is contingent upon the nature of the visual environment. Talk presented at Psycholinguistics in Flanders goes Dutch [PiF 2012]. Berg en Dal (NL). 2012-06-06 - 2012-06-07.
  • Huettig, F., & Janse, E. (2012). Anticipatory eye movements are modulated by working memory capacity: Evidence from older adults. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2012), Riva del Garda, Italy.
  • Huettig, F., Singh, N., Singh, S., & Mishra, R. K. (2012). Language-mediated prediction is related to reading ability and formal literacy. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2012]. Mannheim, Germany. 2012-04-04 - 2012-04-06.
  • Huettig, F. (2012). Literacy modulates language-mediated visual attention and prediction. Talk presented at the Center of Excellence Cognitive Interaction Technology (CITEC). Bielefeld, Germany. 2012-01-12.
  • Huettig, F. (2012). The nature and mechanisms of language-mediated anticipatory eye movements. Talk presented at the International symposium: The Attentive Listener in the Visual world: The Interaction of Language, Attention,Memory, and Vision. Allahabad, India. 2012-10-05 - 2012-10-06.
  • Mani, N., & Huettig, F. (2012). Prediction during language processing is a piece of cake – but only for skilled producers. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2012], Riva del Garda, Italy.

    Abstract

    Background: Adults orient towards an image of a cake upon hearing sentences such as “The boy will eat the cake” even before hearing the word cake, i.e., soon after they hear the verb EAT (Kamide et al., 2003). This finding has been taken to suggest that verb processing includes prediction of nouns that qualify as arguments for these verbs. Upon hearing the verb EAT, adults and young children (three- to ten-year-olds; Borovsky et al., in press) anticipate upcoming linguistic input in keeping with this verb’s selectional restrictions and use this to orient towards images of thematically appropriate arguments.
  • Mani, N., & Huettig, F. (2012). Toddlers anticipate that we EAT cake. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2012]. Mannheim, Germany. 2012-04-04 - 2012-04-06.
  • Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2012). Object shape representations in the contents of predictions for upcoming words. Talk presented at Psycholinguistics in Flanders [PiF 2012]. Berg en Dal, The Netherlands. 2012-06-06 - 2012-06-07.
  • Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2012). The content of predictions: Involvement of object shape representations in the anticipation of upcoming words. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2012]. Mannheim, Germany. 2012-04-04 - 2012-04-06.
  • Rommers, J., Meyer, A. S., & Huettig, F. (2012). Predicting upcoming meaning involves specific contents and domain-general mechanisms. Talk presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2012]. Riva del Garda, Italy. 2012-09-06 - 2012-09-08.

    Abstract

    In sentence comprehension, readers and listeners often anticipate upcoming information (e.g., Altmann & Kamide, 1999). We investigated two aspects of this process, namely 1) what is pre-activated when anticipating an upcoming word (the contents of predictions), and 2) which cognitive mechanisms are involved. The contents of predictions at the level of meaning could be restricted to functional semantic attributes (e.g., edibility; Altmann & Kamide, 1999). However, when words are processed other types of information can also be activated, such as object shape representations. It is unknown whether this type of information is already activated when upcoming words are predicted. Forty-five adult participants listened to predictable words in sentence contexts (e.g., "In 1969 Neil Armstrong was the first man to set foot on the moon.") while looking at visual displays of four objects. Their eye movements were recorded. There were three conditions: target present (e.g., a moon and three distractor objects that were unrelated to the predictable word in terms of semantics, shape, and phonology), shape competitor (e.g., a tomato and three unrelated distractors), and distractors only (e.g., rice and three other unrelated objects). Across lists, the same pictures and sentences were used in the different conditions. We found that participants already showed a significant bias for the target object (moon) over unrelated distractors several seconds before the target was mentioned, demonstrating that they were predicting. Importantly, there was also a smaller but significant shape competitor (tomato) preference starting at about a second before critical word onset, consistent with predictions involving the referent’s shape. The mechanisms of predictions could be specific to language tasks, or language could use processing principles that are also used in other domains of cognition. We investigated whether performance in non-linguistic prediction is related to prediction in language processing, taking an individual differences approach. In addition to the language processing task, the participants performed a simple cueing task (after Posner, Nissen, & Ogden, 1978). They pressed one of two buttons (left/right) to indicate the location of an X symbol on the screen. On half of the trials, the X was preceded by a neutral cue (+). On the other half, an arrow cue pointing left (<) or right (>) indicated the upcoming X's location with 80% validity (i.e., the arrow cue was correct 80% of the time). The SOA between cue and target was 500 ms. Prediction was quantified as the mean response latency difference between the neutral and valid condition. This measure correlated positively with individual participants' anticipatory target and shape competitor preference (r = .27; r = .45), and was a significant predictor of anticipatory looks in linear mixed-effects regression models of the data. Participants who showed more facilitation from the arrow cues predicted to a higher degree in the linguistic task. This suggests that prediction in language processing may use mechanisms that are also used in other domains of cognition. References: Altmann, G. T. M., & Kamide, Y. (1999). Incremental interpretation at verbs: Restricting the domain of subsequent reference. Cognition, 73(3), 247-264. Posner, M. I., Nissen, M. J., & Ogden, W. C. (1978). Attended and unattended processing modes: The role of set for spatial location. In: H.L. Pick, & I.J. Saltzman (Eds.), Modes of perceiving and processing information. Hillsdale, N.J.: Lawrence Erlbaum Associates.
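    The individual-differences analysis described above rests on a simple per-participant prediction score: the mean response-time benefit of valid over neutral cues, which is then related to anticipatory looking in the linguistic task. The sketch below illustrates that computation in Python; all numbers and variable names are hypothetical, and the full analysis reported above additionally used linear mixed-effects regression.

    import numpy as np

    def cueing_benefit(neutral_rts_ms, valid_rts_ms):
        """Posner-style prediction score: mean RT on neutral-cue trials minus
        mean RT on valid-cue trials. Larger values = more facilitation from the cue."""
        return float(np.mean(neutral_rts_ms) - np.mean(valid_rts_ms))

    # Hypothetical per-participant scores (one value per participant)
    benefit = np.array([42.0, 18.5, 60.2, 33.1, 25.7])              # ms of cueing facilitation
    anticipatory_looks = np.array([0.21, 0.08, 0.35, 0.17, 0.12])   # anticipatory target preference

    # Correlation between non-linguistic prediction and linguistic anticipation
    r = np.corrcoef(benefit, anticipatory_looks)[0, 1]
    print(f"r = {r:.2f}")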
  • Smith, A. C., Huettig, F., & Monaghan, P. (2012). Modelling multimodal interaction in language mediated eye gaze. Talk presented at the 13th Neural Computation and Psychology Workshop [NCPW13]. San Sebastian, Spain. 2012-07-12 - 2012-07-14.

    Abstract

    Hub-and-spoke models of semantic processing which integrate modality specific information within a central resource have proven successful in capturing a range of neuropsychological phenomena (Rogers et al, 2004; Dilkina et al, 2008). Within our study we investigate whether the scope of the Hub-and-spoke architectural framework can be extended to capture behavioural phenomena in other areas of cognition. The visual world paradigm (VWP) has contributed significantly to our understanding of the information and processes involved in spoken word recognition. In particular it has highlighted the importance of non-linguistic influences during language processing, indicating that combined information from vision, phonology, and semantics is evident in performance on such tasks (see Huettig, Rommers & Meyer, 2011). Huettig & McQueen (2007) demonstrated that participants’ fixations to objects presented within a single visual display varied systematically according to their phonological, semantic and visual relationship to a spoken target word. The authors argue that only an explanation allowing for influence from all three knowledge types is capable of accounting for the observed behaviour. To date computational models of the VWP (Allopenna et al, 1998; Mayberry et al, 2009; Kukona et al, 2011) have focused largely on linguistic aspects of the task and have therefore been unable to offer explanations for the growing body of experimental evidence emphasising the influence of non-linguistic information on spoken word recognition. Our study demonstrates that an emergent connectionist model, based on the Hub-and-spoke models of semantic processing, which integrates visual, phonological and functional information within a central resource, is able to capture the intricate time course dynamics of eye fixation behaviour reported in Huettig & McQueen (2007). Our findings indicate that such language mediated visual attention phenomena can emerge largely due to the statistics of the problem domain and may not require additional domain specific processing constraints.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2012). Multimodal interaction in a model of visual world phenomena. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2012), Riva del Garda, Italy.

    Abstract

    Existing computational models of the Visual World Paradigm (VWP) have simulated the connection between language processing and eye gaze behavior, and consequently have provided insight into the cognitive processes underlying lexical and sentence comprehension. Allopenna, Magnuson and Tanenhaus (1998) demonstrated that fixation probabilities during spoken word processing can be predicted by lexical activations in the TRACE model of spoken word recognition. Recent computational models have extended this work to predict fixation behavior during sentence processing from the integration of visual and linguistic information. Recent empirical investigations of word-level effects in the VWP support claims that language-mediated eye gaze is not only influenced by overlap at a phonological level (Allopenna, Magnuson & Tanenhaus, 1998) but also by relationships in terms of visual and semantic similarity. Huettig and McQueen (2007) found that when participants heard a word and viewed a scene containing objects phonologically, visually, or semantically similar to the target, all competitors exerted an effect on fixations, but fixations to phonological competitors preceded those to other competitors. Current models of the VWP that simulate the interaction between visual and linguistic information do so with representations that are unable to capture fine-grained semantic, phonological or visual feature relationships. They are therefore limited in their ability to examine effects of multimodal interactions in language processing. Our research extends that of previous models by implementing representations in each modality that are sufficiently rich to capture similarities and distinctions in visual, phonological and semantic representations. Our starting point was to determine the extent to which multimodal interactions between these modalities in the VWP would be emergent from the nature of the representations themselves, rather than determined by architectural constraints. We constructed a recurrent connectionist model, based on hub-and-spoke models of semantic processing, which integrates visual, phonological and semantic information within a central resource. We trained and tested the model on viewing scenes as in Huettig and McQueen’s (2007) study, and found that the model replicated the complex behaviour and time course dynamics of multimodal interaction, such that the model activated phonological competitors prior to activating visual and semantic competitors. Our approach enables us to determine that differences in the computational properties of each modality’s representational structure are sufficient to produce behaviour consistent with the VWP. The componential nature of phonological representations and the holistic structure of visual and semantic representations result in fixations to phonological competitors preceding those to other competitors. Our findings suggest that such language-mediated visual attention phenomena can emerge due to the statistics of the problem domain, with observed behaviour emerging as a natural consequence of differences in the structure of information within each modality, without requiring additional modality-specific architectural constraints.
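    As a purely structural illustration of the hub-and-spoke architecture described above (modality-specific visual, phonological and semantic layers interacting through a shared central resource), the toy Python sketch below shows one possible settling dynamic. Layer sizes, weights and the update rule are arbitrary assumptions for illustration; the actual trained model from the study is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(1)
    N_VIS, N_PHON, N_SEM, N_HUB = 20, 15, 25, 40   # hypothetical layer sizes

    # Random weights standing in for trained connections between each spoke and the hub
    W_in = {m: rng.normal(0, 0.1, (n, N_HUB)) for m, n in
            [("visual", N_VIS), ("phonological", N_PHON), ("semantic", N_SEM)]}
    W_out = {m: rng.normal(0, 0.1, (N_HUB, n)) for m, n in
             [("visual", N_VIS), ("phonological", N_PHON), ("semantic", N_SEM)]}
    W_hub = rng.normal(0, 0.1, (N_HUB, N_HUB))      # recurrent hub-to-hub weights

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def step(hub, inputs):
        """One settling step: all spokes feed the shared hub, and the hub feeds
        activation back to every modality (interaction via the central resource)."""
        drive = hub @ W_hub + sum(inputs[m] @ W_in[m] for m in inputs)
        hub = sigmoid(drive)
        outputs = {m: sigmoid(hub @ W_out[m]) for m in W_out}
        return hub, outputs

    # Hypothetical trial: clamp a visual display and an unfolding spoken word, let the hub settle
    hub = np.zeros(N_HUB)
    inputs = {"visual": rng.random(N_VIS), "phonological": rng.random(N_PHON),
              "semantic": np.zeros(N_SEM)}
    for t in range(10):
        hub, out = step(hub, inputs)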
  • Smith, A. C., Huettig, F., & Monaghan, P. (2012). The Tug of War during spoken word recognition in our visual worlds. Talk presented at Psycholinguistics in Flanders 2012 [PiF 2012]. Berg en Dal, NL. 2012-06-06 - 2012-06-07.
