Falk Huettig

Presentations

  • Huettig, F. (2013). Anticipatory eye movements and predictive language processing. Talk presented at the ZiF research group on "Competition and Priority Control in Mind and Brain". Bielefeld, Germany. 2013-07.
  • Huettig, F., Mani, N., Mishra, R. K., & Brouwer, S. (2013). Literacy as a proxy for experience: Reading ability predicts anticipatory language processing in children, low literate adults, and adults with dyslexia. Poster presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2013), Marseille, France.
  • Huettig, F., Mishra, R. K., Kumar, U., Singh, J. P., Guleria, A., & Tripathi, V. (2013). Phonemic and syllabic awareness of adult literates and illiterates in an Indian alphasyllabic language. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2013]. Vienna, Austria. 2013-03-24 - 2013-03-27.
  • Huettig, F., Mani, N., Mishra, R. K., & Brouwer, S. (2013). Reading ability predicts anticipatory language processing in children, low literate adults, and adults with dyslexia. Talk presented at the 54th Annual Meeting of the Psychonomic Society. Toronto, Canada. 2013-11-14 - 2013-11-17.

  • Huettig, F., Mani, N., Mishra, R. K., & Brouwer, S. (2013). Reading ability predicts anticipatory language processing in children, low literate adults, and adults with dyslexia. Talk presented at the 11th International Symposium of Psycholinguistics. Tenerife, Spain. 2013-03-20 - 2013-03-23.

  • Janse, E., Huettig, F., & Jesse, A. (2013). Working memory modulates the immediate use of context for recognizing words in sentences. Talk presented at the 5th Workshop on Speech in Noise: Intelligibility and Quality. Vitoria, Spain. 2013-01-10 - 2013-01-11.
  • Lai, V. T., & Huettig, F. (2013). When anticipation meets emotion: EEG evidence for distinct processing mechanisms. Poster presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2013), Marseille, France.
  • Lai, V. T., & Huettig, F. (2013). When anticipation meets emotion: EEG evidence for distinct processing mechanisms. Poster presented at the Annual Meeting of the Society for the Neurobiology of Language, San Diego, US.
  • Mani, N., & Huettig, F. (2013). Reading ability predicts anticipatory language processing in 8 year olds. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2013]. Vienna, Austria. 2013-03-24 - 2013-03-27.
  • Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2013). Anticipating references to objects during sentence comprehension. Talk presented at the Experimental Psychology Society meeting (EPS). Bangor, UK. 2013-07-03 - 2013-07-05.
  • Rommers, J., Meyer, A. S., Piai, V., & Huettig, F. (2013). Constraining the involvement of language production in comprehension: A comparison of object naming and object viewing in sentence context. Talk presented at the 19th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2013]. Marseille, France. 2013-09-02 - 2013-09-04.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Both phonological grain-size and general processing speed determine literacy-related differences in language-mediated eye gaze: Evidence from a connectionist model. Poster presented at the 18th Conference of the European Society for Cognitive Psychology [ESCOP 2013], Budapest, Hungary.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Semantic and visual competition eliminates the influence of rhyme overlap in spoken language processing. Poster presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2013], Marseille, France.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Modelling the effect of literacy on multimodal interactions during spoken language processing in the visual world. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2013]. Vienna, Austria. 2013-03-24 - 2013-03-27.

    Abstract

    Recent empirical evidence suggests that language-mediated eye gaze around the visual world varies across individuals and is partly determined by their level of formal literacy training. Huettig, Singh & Mishra (2011) showed that unlike high-literate individuals, whose eye gaze was closely time-locked to phonological overlap between a spoken target word and items presented in a visual display, low-literate individuals' eye gaze was not tightly locked to phonological overlap in the speech signal but instead was strongly influenced by semantic relationships between items. Our present study tests the hypothesis that this behaviour is an emergent property of an increased ability to extract phonological structure from the speech signal, as in the case of high literates, with low literates more reliant on syllabic structure. This hypothesis was tested using an emergent connectionist model, based on the hub-and-spoke models of semantic processing (Dilkina et al., 2008), that integrates linguistic information extracted from the speech signal with visual and semantic information within a central resource. We demonstrate that contrasts in fixation behaviour similar to those observed between high and low literates emerge when the model is trained on a speech signal segmented either by phoneme (i.e., high literates) or by syllable (i.e., low literates).
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Phonological grain size and general processing speed modulates language mediated visual attention – Evidence from a connectionist model. Talk presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2013]. Marseille, France. 2013-09-02 - 2013-09-04.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Putting rhyme in context: Visual and semantic competition eliminates phonological rhyme effects in language-mediated eye gaze. Talk presented at The 18th Conference of the European Society for Cognitive Psychology [ESCOP 2013]. Budapest, Hungary. 2013-08-29 - 2013-09-01.
  • De Groot, F., Huettig, F., & Olivers, C. N. L. (2012). Attentional capture by working memory content: When do words guide attention? Poster presented at the 3rd Symposium on “Visual Search and Selective Attention” (VSSA III), Munich, Germany.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2012). Looking at nothing facilitates memory retrieval. Poster presented at Donders Discussions 2012, Nijmegen, The Netherlands.

    Abstract

    When processing visual objects, we integrate visual, linguistic and spatial information to form an episodic trace. Re-activating one aspect of the episodic trace of an object re-activates the entire bundle, making all integrated information available. Using the blank screen paradigm [1], researchers observed that upon processing spoken linguistic input, participants tended to make eye movements on a blank screen, fixating locations that had previously been occupied by objects mentioned in the linguistic utterance or related to them. Ferreira and colleagues [2] suggested that 'looking at nothing' facilitated memory retrieval. However, this claim lacks convincing empirical support. In Experiment 1, Dutch participants looked at four-object displays. Three objects were related to a spoken target word. Given the target word 'beker' (beaker), the display featured a phonological competitor (a bear), a shape competitor (a bobbin), a semantic competitor (a fork), and an unrelated distractor (an umbrella). Participants were asked to name the objects as fast as possible. Subsequently, the objects disappeared. Participants fixated the center of the screen and listened to the target word. They had to carry out a semantic judgment task (indicating the position where the object semantically related to the target had appeared) or a visual shape similarity judgment (indicating the position of the object similar in shape to the target). In both conditions, we observed that participants re-fixated the empty target location before responding. The set-up of Experiment 2 was identical except that we asked participants to maintain fixation on the center of the screen while listening to the spoken word and responding. Performance accuracy was significantly lower in Experiment 2 than in Experiment 1. The results indicate that memory retrieval for objects is impaired when participants are not allowed to look at relevant, though empty, locations. [1] Altmann, G. (2004). Language-mediated eye movements in the absence of a visual world: the 'blank screen paradigm'. Cognition, 93(2), B79-B87. [2] Ferreira, F., Apel, J., & Henderson, J. M. (2008). Taking a new look at looking at nothing. Trends in Cognitive Sciences, 12(11), 405-410.
  • Hintz, F., & Huettig, F. (2012). Phonological word-object mapping is contingent upon the nature of the visual environment. Talk presented at Psycholinguistics in Flanders goes Dutch [PiF 2012]. Berg en Dal, The Netherlands. 2012-06-06 - 2012-06-07.
  • Hintz, F., & Huettig, F. (2012). Phonological word-object mapping is contingent upon the nature of the visual environment. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2012), Riva del Garda, Italy.

    Abstract

    Four eye-tracking experiments investigated the impact of the nature of the visual environment on the likelihood of word-object mapping taking place at a phonological level of representation during language-mediated visual search. Dutch participants heard single spoken target words while looking at four objects embedded in displays of different complexity and were asked to indicate the presence or absence of the target object. During filler trials the target objects were present, but during experimental trials they were absent and the display contained various competitor objects. For example, given the target word 'beaker', the display contained a phonological competitor (a beaver, bever), a shape competitor (a bobbin, klos), a semantic competitor (a fork, vork), and an unrelated distractor (an umbrella, paraplu). When objects were embedded in semi-realistic scenes including four human-like characters (Experiments 1, 3, and 4a), there were no biases in looks to phonological competitors, even when the objects' contours were highlighted (Experiment 3) and an object naming task was administered right before the eye-tracking experiment (Experiment 4a). In all three experiments, however, we observed evidence for inhibition in looks to phonological competitors, which suggests that the phonological forms of the objects had been retrieved. When objects were presented in simple four-object displays (Experiments 2 and 4b), there were clear attentional biases to phonological competitors, replicating earlier research (Huettig & McQueen, 2007). These findings suggest that phonological word-object mapping is contingent upon the nature of the visual environment and add to a growing body of evidence that the nature of our visual surroundings induces particular modes of processing during language-mediated visual search. References: Huettig, F., & McQueen, J. M. (2007). The tug of war between phonological, semantic and shape information in language-mediated visual search. Journal of Memory and Language, 57(4), 460-482. doi: 10.1016/j.jml.2007.02.001
  • Huettig, F., & Janse, E. (2012). Anticipatory eye movements are modulated by working memory capacity: Evidence from older adults. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2012), Riva del Garda, Italy.
  • Huettig, F. (2012). Literacy modulates language-mediated visual attention and prediction. Talk presented at the Center of Excellence Cognitive Interaction Technology (CITEC). Bielefeld, Germany. 2012-01-12.
  • Huettig, F., Singh, N., Singh, S., & Mishra, R. K. (2012). Language-mediated prediction is related to reading ability and formal literacy. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2012]. Mannheim, Germany. 2012-04-04 - 2012-04-06.
  • Huettig, F. (2012). The nature and mechanisms of language-mediated anticipatory eye movements. Talk presented at the International symposium: The Attentive Listener in the Visual World: The Interaction of Language, Attention, Memory, and Vision. Allahabad, India. 2012-10-05 - 2012-10-06.
  • Mani, N., & Huettig, F. (2012). Prediction during language processing is a piece of cake – but only for skilled producers. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2012], Riva del Garda, Italy.

    Abstract

    Background: Adults orient towards an image of a cake upon hearing sentences such as “The boy will eat the cake” even before hearing the word cake, i.e., soon after they hear the verb EAT (Kamide et al., 2003). This finding has been taken to suggest that verb processing includes prediction of nouns that qualify as arguments for these verbs. Upon hearing the verb EAT, adults and young children (three- to ten-year-olds; Borovsky et al., in press) anticipate upcoming linguistic input in keeping with this verb’s selectional restrictions and use this to orient towards images of thematically appropriate arguments.
  • Mani, N., & Huettig, F. (2012). Toddlers anticipate that we EAT cake. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2012]. Mannheim, Germany. 2012-04-04 - 2012-04-06.
  • Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2012). Object shape representations in the contents of predictions for upcoming words. Talk presented at Psycholinguistics in Flanders [PiF 2012]. Berg en Dal, The Netherlands. 2012-06-06 - 2012-06-07.
  • Rommers, J., Meyer, A. S., & Huettig, F. (2012). Predicting upcoming meaning involves specific contents and domain-general mechanisms. Talk presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2012]. Riva del Garda, Italy. 2012-09-06 - 2012-09-08.

    Abstract

    In sentence comprehension, readers and listeners often anticipate upcoming information (e.g., Altmann & Kamide, 1999). We investigated two aspects of this process, namely 1) what is pre-activated when anticipating an upcoming word (the contents of predictions), and 2) which cognitive mechanisms are involved. The contents of predictions at the level of meaning could be restricted to functional semantic attributes (e.g., edibility; Altmann & Kamide, 1999). However, when words are processed other types of information can also be activated, such as object shape representations. It is unknown whether this type of information is already activated when upcoming words are predicted. Forty-five adult participants listened to predictable words in sentence contexts (e.g., "In 1969 Neil Armstrong was the first man to set foot on the moon.") while looking at visual displays of four objects. Their eye movements were recorded. There were three conditions: target present (e.g., a moon and three distractor objects that were unrelated to the predictable word in terms of semantics, shape, and phonology), shape competitor (e.g., a tomato and three unrelated distractors), and distractors only (e.g., rice and three other unrelated objects). Across lists, the same pictures and sentences were used in the different conditions. We found that participants already showed a significant bias for the target object (moon) over unrelated distractors several seconds before the target was mentioned, demonstrating that they were predicting. Importantly, there was also a smaller but significant shape competitor (tomato) preference starting at about a second before critical word onset, consistent with predictions involving the referent’s shape. The mechanisms of predictions could be specific to language tasks, or language could use processing principles that are also used in other domains of cognition. 
We investigated whether performance in non-linguistic prediction is related to prediction in language processing, taking an individual differences approach. In addition to the language processing task, the participants performed a simple cueing task (after Posner, Nissen, & Ogden, 1978). They pressed one of two buttons (left/right) to indicate the location of an X symbol on the screen. On half of the trials, the X was preceded by a neutral cue (+). On the other half, an arrow cue pointing left (<) or right (>) indicated the upcoming X's location with 80% validity (i.e., the arrow cue was correct 80% of the time). The SOA between cue and target was 500 ms. Prediction was quantified as the mean response latency difference between the neutral and valid conditions. This measure correlated positively with individual participants' anticipatory target and shape competitor preferences (r = .27 and r = .45, respectively), and was a significant predictor of anticipatory looks in linear mixed-effects regression models of the data. Participants who showed more facilitation from the arrow cues predicted to a higher degree in the linguistic task. This suggests that prediction in language processing may use mechanisms that are also used in other domains of cognition. References: Altmann, G. T. M., & Kamide, Y. (1999). Incremental interpretation at verbs: Restricting the domain of subsequent reference. Cognition, 73(3), 247-264. Posner, M. I., Nissen, M. J., & Ogden, W. C. (1978). Attended and unattended processing modes: The role of set for spatial location. In H. L. Pick & I. J. Saltzman (Eds.), Modes of perceiving and processing information. Hillsdale, NJ: Lawrence Erlbaum Associates.
  • Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2012). The content of predictions: Involvement of object shape representations in the anticipation of upcoming words. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2012]. Mannheim, Germany. 2012-04-04 - 2012-04-06.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2012). Multimodal interaction in a model of visual world phenomena. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2012), Riva del Garda, Italy.

    Abstract

    Existing computational models of the Visual World Paradigm (VWP) have simulated the connection between language processing and eye gaze behavior, and consequently have provided insight into the cognitive processes underlying lexical and sentence comprehension. Allopenna, Magnuson and Tanenhaus (1998) demonstrated that fixation probabilities during spoken word processing can be predicted by lexical activations in the TRACE model of spoken word recognition. Recent computational models have extended this work to predict fixation behavior during sentence processing from the integration of visual and linguistic information. Recent empirical investigations of word-level effects in the VWP support claims that language-mediated eye gaze is not only influenced by overlap at a phonological level (Allopenna, Magnuson & Tanenhaus, 1998) but also by relationships in terms of visual and semantic similarity. Huettig and McQueen (2007) found that when participants heard a word and viewed a scene containing objects phonologically, visually, or semantically similar to the target, all competitors exerted an effect on fixations, but fixations to phonological competitors preceded those to other competitors. Current models of the VWP that simulate the interaction between visual and linguistic information do so with representations that are unable to capture fine-grained semantic, phonological or visual feature relationships. They are therefore limited in their ability to examine effects of multimodal interactions in language processing. Our research extends that of previous models by implementing representations in each modality that are sufficiently rich to capture similarities and distinctions in visual, phonological and semantic representations. Our starting point was to determine the extent to which multimodal interactions between these modalities in the VWP would be emergent from the nature of the representations themselves, rather than determined by architectural constraints. 
We constructed a recurrent connectionist model, based on hub-and-spoke models of semantic processing, which integrates visual, phonological and semantic information within a central resource. We trained and tested the model on viewing scenes as in Huettig and McQueen’s (2007) study, and found that the model replicated the complex behaviour and time course dynamics of multimodal interaction, such that the model activated phonological competitors prior to activating visual and semantic competitors. Our approach enables us to determine that differences in the computational properties of each modality’s representational structure are sufficient to produce behaviour consistent with the VWP. The componential nature of phonological representations and the holistic structure of visual and semantic representations result in fixations to phonological competitors preceding those to other competitors. Our findings suggest that such language-mediated visual attention phenomena can emerge due to the statistics of the problem domain, with observed behaviour emerging as a natural consequence of differences in the structure of information within each modality, without requiring additional modality-specific architectural constraints.
  • Smith, A. C., Huettig, F., & Monaghan, P. (2012). Modelling multimodal interaction in language mediated eye gaze. Talk presented at the 13th Neural Computation and Psychology Workshop [NCPW13]. San Sebastian, Spain. 2012-07-12 - 2012-07-14.

    Abstract

    Hub-and-spoke models of semantic processing, which integrate modality-specific information within a central resource, have proven successful in capturing a range of neuropsychological phenomena (Rogers et al., 2004; Dilkina et al., 2008). In our study we investigate whether the scope of the hub-and-spoke architectural framework can be extended to capture behavioural phenomena in other areas of cognition. The visual world paradigm (VWP) has contributed significantly to our understanding of the information and processes involved in spoken word recognition. In particular, it has highlighted the importance of non-linguistic influences during language processing, indicating that combined information from vision, phonology, and semantics is evident in performance on such tasks (see Huettig, Rommers & Meyer, 2011). Huettig and McQueen (2007) demonstrated that participants’ fixations to objects presented within a single visual display varied systematically according to their phonological, semantic and visual relationship to a spoken target word. The authors argue that only an explanation allowing for influence from all three knowledge types is capable of accounting for the observed behaviour. To date, computational models of the VWP (Allopenna et al., 1998; Mayberry et al., 2009; Kukona et al., 2011) have focused largely on linguistic aspects of the task and have therefore been unable to offer explanations for the growing body of experimental evidence emphasising the influence of non-linguistic information on spoken word recognition. Our study demonstrates that an emergent connectionist model, based on the hub-and-spoke models of semantic processing, which integrates visual, phonological and functional information within a central resource, is able to capture the intricate time course dynamics of eye fixation behaviour reported in Huettig and McQueen (2007). Our findings indicate that such language-mediated visual attention phenomena can emerge largely due to the statistics of the problem domain and may not require additional domain-specific processing constraints.
  • Smith, A. C., Huettig, F., & Monaghan, P. (2012). The Tug of War during spoken word recognition in our visual worlds. Talk presented at Psycholinguistics in Flanders 2012 [PiF 2012]. Berg en Dal, The Netherlands. 2012-06-06 - 2012-06-07.
  • Huettig, F. (2010). Looking, language, and memory. Talk presented at Language, Cognition, and Emotion Workshop. Delhi, India. 2010-12-06 - 2010-12-06.
  • Huettig, F., & Gastel, A. (2010). Language-mediated eye movements and attentional control: Phonological and semantic competition effects are contingent upon scene complexity. Poster presented at the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010], York, UK.

  • Huettig, F., Singh, N., & Mishra, R. (2010). Language-mediated prediction is contingent upon formal literacy. Talk presented at Brain, Speech and Orthography Workshop. Brussels, Belgium. 2010-10-15 - 2010-10-16.

    Abstract

    A wealth of research has demonstrated that prediction is a core feature of human information processing. Much less is known, however, about the nature and the extent of predictive processing abilities. Here we investigated whether high levels of language expertise attained through formal literacy are related to anticipatory language-mediated visual orienting. Indian low and high literates listened to simple spoken sentences containing a target word (e.g., "door") while at the same time looking at a visual display of four objects (a target, i.e. the door, and three distractors). The spoken sentences were constructed to encourage anticipatory eye movements to visual target objects. High literates started to shift their eye gaze to the target object well before target word onset. In the low literacy group this shift of eye gaze occurred more than a second later, well after the onset of the target. Our findings suggest that formal literacy is crucial for the fine-tuning of language-mediated anticipatory mechanisms, abilities which proficient language users can then exploit for other cognitive activities such as language-mediated visual orienting.
  • Huettig, F. (2010). Toddlers’ language-mediated visual search: They need not have the words for it. Talk presented at International Conference on Cognitive Development 2010. Allahabad, India. 2010-12-10 - 2010-12-13.

    Abstract

    Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., “strawberry”), listeners tend to fixate a colour-matched distractor (e.g., a red plane) more than a colour-mismatched distractor (e.g., a yellow plane). We ask whether these shifts in visual attention are mediated by the retrieval of lexically stored colour labels. Do children who do not yet possess verbal labels for the colour attribute that spoken and viewed objects have in common exhibit language-mediated eye movements like those made by older children and adults? That is, do toddlers look at a red plane when hearing “strawberry”? We observed that 24-month-olds lacking colour-term knowledge nonetheless recognised the perceptual-conceptual commonality between named and seen objects. This indicates that language-mediated visual search need not depend on stored labels for concepts.
  • Rommers, J., Huettig, F., & Meyer, A. S. (2010). Task-dependency in the activation of visual representations during language comprehension. Poster presented at The Embodied Mind: Perspectives and Limitations, Nijmegen, The Netherlands.
  • Rommers, J., Huettig, F., & Meyer, A. S. (2010). Task-dependent activation of visual representations during language comprehension. Poster presented at The 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010], York, UK.
