Falk Huettig

Presentations

  • Garrido Rodriguez, G., Huettig, F., Norcliffe, E., Brown, P., & Levinson, S. C. (2017). Participant assignment to thematic roles in Tzeltal: Eye tracking evidence from sentence comprehension in a verb-initial language. Poster presented at the workshop 'Event Representations in Brain, Language & Development' (EvRep), Nijmegen, The Netherlands.
  • Ostarek, M., Van Paridon, J., Evans, S., & Huettig, F. (2017). Processing of up/down words recruits the cortical oculomotor network. Poster presented at the 24th Annual Meeting of the Cognitive Neuroscience Society, San Francisco, CA, USA.
  • Huettig, F., Mani, N., Mishra, R. K., & Brouwer, S. (2013). Literacy as a proxy for experience: Reading ability predicts anticipatory language processing in children, low literate adults, and adults with dyslexia. Poster presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2013), Marseille, France.
  • Lai, V. T., & Huettig, F. (2013). When anticipation meets emotion: EEG evidence for distinct processing mechanisms. Poster presented at the Annual Meeting of the Society for the Neurobiology of Language, San Diego, CA, USA.
  • Lai, V. T., & Huettig, F. (2013). When anticipation meets emotion: EEG evidence for distinct processing mechanisms. Poster presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2013), Marseille, France.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Both phonological grain-size and general processing speed determine literacy-related differences in language-mediated eye gaze: Evidence from a connectionist model. Poster presented at The 18th Conference of the European Society for Cognitive Psychology (ESCOP 2013), Budapest, Hungary.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Semantic and visual competition eliminates the influence of rhyme overlap in spoken language processing. Poster presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2013), Marseille, France.
  • De Groot, F., Huettig, F., & Olivers, C. N. L. (2012). Attentional capture by working memory content: When do words guide attention? Poster presented at the 3rd Symposium on “Visual Search and Selective Attention” (VSSA III), Munich, Germany.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2012). Looking at nothing facilitates memory retrieval. Poster presented at Donders Discussions 2012, Nijmegen, The Netherlands.

    Abstract

    When processing visual objects, we integrate visual, linguistic and spatial information to form an episodic trace. Re-activating one aspect of the episodic trace of an object re-activates the entire bundle, making all integrated information available. Using the blank screen paradigm [1], researchers observed that, upon processing spoken linguistic input, participants tended to make eye movements on a blank screen, fixating locations that had previously been occupied by objects that were mentioned in the linguistic utterance or related to it. Ferreira and colleagues [2] suggested that 'looking at nothing' facilitated memory retrieval. However, this claim lacks convincing empirical support. In Experiment 1, Dutch participants looked at four-object displays. Three objects were related to a spoken target word. Given the target word 'beker' (beaker), the display featured a phonological competitor (a bear), a shape competitor (a bobbin), a semantic competitor (a fork), and an unrelated distractor (an umbrella). Participants were asked to name the objects as fast as possible. Subsequently, the objects disappeared. Participants fixated the center of the screen and listened to the target word. They had to carry out either a semantic judgment task (indicating in which position the semantically related object had appeared) or a visual shape similarity judgment (indicating the position of the object similar in shape to the target). In both conditions, we observed that participants re-fixated the empty target location before responding. The set-up of Experiment 2 was identical except that we asked participants to maintain fixation on the center of the screen while listening to the spoken word and responding. Performance accuracy was significantly lower in Experiment 2 than in Experiment 1. The results indicate that memory retrieval for objects is impaired when participants are not allowed to look at relevant, though empty, locations. [1] Altmann, G. (2004). Language-mediated eye movements in the absence of a visual world: the 'blank screen paradigm'. Cognition, 93(2), B79-B87. [2] Ferreira, F., Apel, J., & Henderson, J. M. (2008). Taking a new look at looking at nothing. Trends in Cognitive Sciences, 12(11), 405-410.
  • Hintz, F., & Huettig, F. (2012). Phonological word-object mapping is contingent upon the nature of the visual environment. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2012), Riva del Garda, Italy.

    Abstract

    Four eye-tracking experiments investigated the impact of the nature of the visual environment on the likelihood of word-object mapping taking place at a phonological level of representation during language-mediated visual search. Dutch participants heard single spoken target words while looking at four objects embedded in displays of different complexity and were asked to indicate the presence or absence of the target object. During filler trials the target objects were present, but during experimental trials they were absent and the display contained various competitor objects. For example, given the target word 'beaker', the display contained a phonological competitor (a beaver, bever), a shape competitor (a bobbin, klos), a semantic competitor (a fork, vork), and an unrelated distractor (an umbrella, paraplu). When objects were embedded in semi-realistic scenes including four human-like characters (Experiments 1, 3, and 4a), there were no biases in looks to phonological competitors, even when the objects' contours were highlighted (Experiment 3) and an object naming task was administered right before the eye-tracking experiment (Experiment 4a). In all three experiments, however, we observed evidence for inhibition in looks to phonological competitors, which suggests that the phonological forms of the objects had been retrieved. When objects were presented in simple four-object displays (Experiments 2 and 4b), there were clear attentional biases to phonological competitors, replicating earlier research (Huettig & McQueen, 2007). These findings suggest that phonological word-object mapping is contingent upon the nature of the visual environment and add to a growing body of evidence that the nature of our visual surroundings induces particular modes of processing during language-mediated visual search. Reference: Huettig, F., & McQueen, J. M. (2007). The tug of war between phonological, semantic and shape information in language-mediated visual search. Journal of Memory and Language, 57(4), 460-482. doi: 10.1016/j.jml.2007.02.001
  • Huettig, F., & Janse, E. (2012). Anticipatory eye movements are modulated by working memory capacity: Evidence from older adults. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2012), Riva del Garda, Italy.
  • Mani, N., & Huettig, F. (2012). Prediction during language processing is a piece of cake – but only for skilled producers. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2012), Riva del Garda, Italy.

    Abstract

    Background: Adults orient towards an image of a cake upon hearing sentences such as “The boy will eat the cake” even before hearing the word cake, i.e., soon after they hear the verb EAT (Kamide et al., 2003). This finding has been taken to suggest that verb processing includes prediction of nouns that qualify as arguments for these verbs. Upon hearing the verb EAT, adults and young children (three- to ten-year-olds; Borovsky et al., in press) anticipate upcoming linguistic input in keeping with this verb’s selectional restrictions and use this to orient towards images of thematically appropriate arguments.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2012). Multimodal interaction in a model of visual world phenomena. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2012), Riva del Garda, Italy.

    Abstract

    Existing computational models of the Visual World Paradigm (VWP) have simulated the connection between language processing and eye gaze behavior, and consequently have provided insight into the cognitive processes underlying lexical and sentence comprehension. Allopenna, Magnuson and Tanenhaus (1998) demonstrated that fixation probabilities during spoken word processing can be predicted by lexical activations in the TRACE model of spoken word recognition. Recent computational models have extended this work to predict fixation behavior during sentence processing from the integration of visual and linguistic information. Recent empirical investigations of word-level effects in the VWP support claims that language-mediated eye gaze is influenced not only by overlap at a phonological level (Allopenna, Magnuson & Tanenhaus, 1998) but also by relationships in terms of visual and semantic similarity. Huettig and McQueen (2007) found that when participants heard a word and viewed a scene containing objects phonologically, visually, or semantically similar to the target, all competitors exerted an effect on fixations, but fixations to phonological competitors preceded those to other competitors. Current models of the VWP that simulate the interaction between visual and linguistic information do so with representations that are unable to capture fine-grained semantic, phonological or visual feature relationships. They are therefore limited in their ability to examine effects of multimodal interactions in language processing. Our research extends that of previous models by implementing representations in each modality that are sufficiently rich to capture similarities and distinctions in visual, phonological and semantic representations. Our starting point was to determine the extent to which multimodal interactions between these modalities in the VWP would be emergent from the nature of the representations themselves, rather than determined by architectural constraints. We constructed a recurrent connectionist model, based on hub-and-spoke models of semantic processing, which integrates visual, phonological and semantic information within a central resource (see the illustrative sketch after this list). We trained and tested the model on viewing scenes as in Huettig and McQueen’s (2007) study, and found that the model replicated the complex behaviour and time-course dynamics of multimodal interaction, such that the model activated phonological competitors prior to activating visual and semantic competitors. Our approach enables us to determine that differences in the computational properties of each modality’s representational structure are sufficient to produce behaviour consistent with the VWP. The componential nature of phonological representations and the holistic structure of visual and semantic representations result in fixations to phonological competitors preceding those to other competitors. Our findings suggest that such language-mediated visual attention phenomena can emerge from the statistics of the problem domain, with the observed behaviour arising as a natural consequence of differences in the structure of information within each modality, without requiring additional modality-specific architectural constraints.
  • Brouwer, S., Mitterer, H., & Huettig, F. (2009). Listeners reconstruct reduced forms during spontaneous speech: Evidence from eye movements. Poster presented at the 15th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2009), Barcelona, Spain.
  • Brouwer, S., Mitterer, H., & Huettig, F. (2009). Phonological competition during the recognition of spontaneous speech: Effects of linguistic context and spectral cues. Poster presented at the 157th Meeting of the Acoustical Society of America, Portland, OR, USA.

    Abstract

    How do listeners recognize reduced forms that occur in spontaneous speech, such as “puter” for “computer”? To address this question, eye-tracking experiments were performed in which participants heard a sentence and saw four printed words on a computer screen. The auditory stimuli contained canonical and reduced forms from a spontaneous speech corpus, presented in different amounts of linguistic context. The four printed words were a “canonical form” competitor (e.g., “companion”, phonologically similar to “computer”), a “reduced form” competitor (e.g., “pupil”, phonologically similar to “puter”), and two unrelated distractors. The results showed, first, that reduction inhibits word recognition overall. Second, listeners looked more often to the “reduced form” competitor than to the “canonical form” competitor when reduced forms were presented in isolation or in a phonetic context. In full context, however, both competitors attracted looks: an early rise of the “reduced form” competitor and a late rise of the “canonical form” competitor. This “late rise” of the “canonical form” competitor was not observed when we replaced the original /p/ from “puter” with a real onset /p/. This indicates that phonetic detail and semantic/syntactic context are necessary for the recognition of reduced forms.
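
The recurrent hub-and-spoke architecture described in the Smith, Monaghan, & Huettig (2012) abstract above can be pictured with a small sketch. The Python/NumPy toy below is not the authors' model: the layer sizes, the untrained random weights, the sigmoid units, and the presentation of the spoken word as a sequence of time slices are all illustrative assumptions. It only shows the general idea of phonological, visual and semantic inputs converging on a shared recurrent hub whose state is read out as activation over the four display locations.

    import numpy as np

    # Hypothetical toy sketch (not the reported implementation): a recurrent
    # "hub" integrates phonological, visual and semantic input and is read out
    # as activation over four display locations. Sizes are assumptions and the
    # weights are random, i.e. the network is untrained.
    rng = np.random.default_rng(seed=0)

    N_PHON, N_VIS, N_SEM, N_HUB, N_LOC = 30, 20, 20, 40, 4   # assumed layer sizes

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Random weight matrices stand in for connections that would be learned.
    W_phon = rng.normal(0.0, 0.1, (N_PHON, N_HUB))   # phonological spoke -> hub
    W_vis  = rng.normal(0.0, 0.1, (N_VIS,  N_HUB))   # visual spoke -> hub
    W_sem  = rng.normal(0.0, 0.1, (N_SEM,  N_HUB))   # semantic spoke -> hub
    W_rec  = rng.normal(0.0, 0.1, (N_HUB,  N_HUB))   # recurrent hub -> hub
    W_eye  = rng.normal(0.0, 0.1, (N_HUB,  N_LOC))   # hub -> one unit per display location

    def step(phon_t, vis, sem, hub_prev):
        """One time step: the spoken word unfolds slice by slice (phon_t),
        while the visual/semantic description of the display stays constant."""
        hub = sigmoid(phon_t @ W_phon + vis @ W_vis + sem @ W_sem + hub_prev @ W_rec)
        eye = sigmoid(hub @ W_eye)        # location activations ~ fixation tendencies
        return hub, eye

    # Toy run: five time slices of phonological input for one spoken word.
    phon_seq = rng.random((5, N_PHON))
    vis_in, sem_in = rng.random(N_VIS), rng.random(N_SEM)
    hub = np.zeros(N_HUB)
    for t, phon_t in enumerate(phon_seq):
        hub, eye = step(phon_t, vis_in, sem_in, hub)
        print(f"t={t}: location activations = {np.round(eye, 2)}")

In the reported model, by contrast, the representations in each modality are structured (componential phonological codes, holistic visual and semantic codes) and the network is trained on scenes like those of Huettig and McQueen (2007); according to the abstract, it is those representational differences, not extra architectural constraints, that produce the earlier rise of phonological competitors.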
