Falk Huettig

Presentations

  • De Groot, F., Huettig, F., & Olivers, C. (2015). When meaning matters: The temporal dynamics of semantic influences on visual attention. Poster presented at the Psychonomic Society's 56th Annual Meeting, Chicago, USA.
  • De Groot, F., Huettig, F., & Olivers, C. (2015). When meaning matters: The temporal dynamics of semantic influences on visual attention. Poster presented at the 19th Meeting of the European Society for Cognitive Psychology (ESCoP 2015), Paphos, Cyprus.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2015). Doing a production task encourages prediction: Evidence from interleaved object naming and sentence reading. Poster presented at the 28th Annual CUNY Conference on Human Sentence Processing, Los Angeles (CA, USA).

    Abstract

    Prominent theories of predictive language processing assume that language production processes are used to anticipate upcoming linguistic input during comprehension (Dell & Chang, 2014; Pickering & Garrod, 2013). Here, we explored the converse case: Does a task set including production in addition to comprehension encourage prediction, compared to a task including only comprehension? To test this hypothesis, participants carried out a cross-modal naming task (Exp 1a), a self-paced reading task (Exp 1b) that did not include overt production, and a task (Exp 1c) in which naming and reading trials were evenly interleaved. We used the same predictable (N = 40) and non-predictable (N = 40) sentences in all three tasks. The sentences consisted of a fixed agent, a transitive verb and a predictable or non-predictable target word (The man breaks a glass vs. The man borrows a glass). The mean cloze probability in the predictable sentences was .39 (ranging from .06 to .8; zero in the non-predictable sentences). A total of 162 volunteers took part in the experiment, which was run in a between-participants design. In Exp 1a, fifty-four participants listened to recordings of the sentences, which ended right before the spoken target word. Coinciding with the end of the playback, a picture of the target word was shown, which the participants were asked to name as fast as possible. Analyses of their naming latencies revealed a statistically significant naming advantage of 108 ms on predictable over non-predictable trials. Moreover, we found that the objects’ naming advantage was predicted by the target words’ cloze probability in the sentences (r = .347, p = .038). In Exp 1b, 54 participants were asked to read the same sentences in a self-paced fashion. To allow for testing of potential spillover effects, we added a neutral prepositional phrase (breaks a glass from the collection/borrows a glass from the neighbor) to each sentence. 
The sentences were read word-by-word, advancing by pushing the space bar. On 30% of the trials, comprehension questions were used to keep participants focused on comprehending the sentences. Analyses of their spillover region reading times revealed a numerical advantage (8 ms; tspillover = -1.1, n.s.) in the predictable as compared to the non-predictable condition. Importantly, the analysis of participants' responses to the comprehension questions showed that they understood the sentences (mean accuracy = 93%). In Exp 1c, the task comprised 50% naming trials and 50% reading trials, which appeared in random order. Fifty-four participants named and read the same objects and sentences as in the previous versions. The results showed a naming advantage on predictable over non-predictable items (99 ms) and a positive correlation between the items’ cloze probability and their naming advantage (r = .322, p = .055). Crucially, the post-target reading time analysis showed that with naming trials and reading trials interleaved, there was also a statistically reliable prediction effect on reading trials. Participants were 19 ms faster at reading the spillover region on predictable relative to non-predictable items (tspillover = -2.624). To summarize, although we used the same sentences in all sub-experiments, we observed effects of prediction only when the task set involved production. In the reading-only experiment (Exp 1b), no evidence for anticipation was obtained although participants clearly understood the sentences, and the same sentences yielded reading facilitation when interleaved with naming trials (Exp 1c). This suggests that predictive language processing can be modulated by the comprehenders’ task set. When the task set involves language production, as is often the case in natural conversation, comprehenders appear to engage in prediction to a stronger degree than in pure comprehension tasks. 
In our discussion, we will consider the notion that language production may engage prediction: being able to predict the words another person is about to say might optimize the comprehension process and enable smooth turn-taking.
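    The cloze probabilities reported above come from standard sentence-completion norming: a word's cloze probability is the proportion of norming participants who complete the sentence frame with that word. A minimal sketch of the computation (the responses and function name below are invented for illustration, not the authors' materials):

```python
# Hypothetical sketch: cloze probability as the share of norming
# participants who complete a sentence frame with a given word.
from collections import Counter

def cloze_probability(completions, target):
    """Proportion of completions matching `target` (case-insensitive)."""
    counts = Counter(word.strip().lower() for word in completions)
    return counts[target] / len(completions)

# Invented norming responses for a frame like "The man breaks a ...":
responses = ["glass", "window", "glass", "vase", "glass", "cup", "glass", "glass"]
print(cloze_probability(responses, "glass"))  # 5 of 8 responses -> 0.625
```

    On this measure, the predictable targets above averaged .39 and the non-predictable targets scored zero, since no norming participant produced them.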
  • Hintz, F., Meyer, A. S., & Huettig, F. (2015). Event knowledge and word associations jointly influence predictive processing during discourse comprehension. Poster presented at the 28th Annual CUNY Conference on Human Sentence Processing, Los Angeles (CA, USA).

    Abstract

    A substantial body of literature has shown that readers and listeners often anticipate information. An open question concerns the mechanisms underlying predictive language processing. Multiple mechanisms have been suggested. One proposal is that comprehenders use event knowledge to predict upcoming words. Other theoretical frameworks propose that predictions are made based on simple word associations. In a recent EEG study, Metusalem and colleagues reported evidence for the modulating influence of event knowledge on prediction. They examined the degree to which event knowledge is activated during sentence comprehension. Their participants read two sentences establishing an event scenario, which were followed by a final sentence containing one of three target words: a highly expected word, a semantically unexpected word that was related to the described event, or a semantically unexpected and event-unrelated word (see Figure for an example). Analyses of participants’ ERPs elicited by the target words revealed a three-way split in the amplitude of the N400 elicited by the different types of target: the expected targets elicited the smallest N400, and the unexpected and event-unrelated targets elicited the largest N400. Importantly, the amplitude of the N400 elicited by the unexpected but event-related targets was significantly attenuated relative to that elicited by the unexpected and event-unrelated targets. Metusalem et al. concluded that event knowledge is immediately available to constrain on-line language processing. Based on a post-hoc analysis, the authors rejected the possibility that the results could be explained by simple word associations. In the present study, we addressed the role of simple word associations in discourse comprehension more directly. Specifically, we explored the contribution of associative priming to the graded N400 pattern seen in Metusalem et al.'s study. We conducted two EEG experiments. 
In Experiment 1, we reran Metusalem and colleagues’ context manipulation and closely replicated their results. In Experiment 2, we selected the two words from the event-establishing sentences that were most strongly associated with the unexpected but event-related targets in the final sentences. Each of the two associates was then placed in a neutral carrier sentence. We ensured that none of the other words in these carrier sentences was associatively related to the target words. Importantly, the two carrier sentences did not build up a coherent event. We recorded EEG while participants read the carrier sentences followed by the same final sentences as in Experiment 1. The results showed that, as in Experiment 1, the amplitude of the N400 elicited by both types of unexpected target words was larger than the N400 elicited by the highly expected target. Moreover, we found a global tendency towards the critical difference between event-related and event-unrelated unexpected targets, which reached statistical significance only at parietal electrodes over the right hemisphere. Because the difference between the event-related and event-unrelated conditions was larger when the sentences formed a coherent event than when they did not, our results suggest that associative priming alone cannot account for the N400 pattern observed in our Experiment 1 (and in the study by Metusalem et al.). However, because part of the effect remained, probably due to associative facilitation, the findings demonstrate that during discourse reading both event knowledge activation and simple word associations jointly contribute to the prediction process. The results highlight that multiple mechanisms underlie predictive language processing.
  • Huettig, F., & Guerra, E. (2015). Testing the limits of prediction in language processing: Prediction occurs but far from always. Poster presented at the 21st Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2015), Valletta, Malta.
  • Ostarek, M., & Huettig, F. (2015). Grounding language in the visual system: Visual noise interferes more with concrete than abstract word processing. Poster presented at the 19th Meeting of the European Society for Cognitive Psychology (ESCoP 2015), Paphos, Cyprus.
  • Huettig, F., Mani, N., Mishra, R. K., & Brouwer, S. (2013). Literacy as a proxy for experience: Reading ability predicts anticipatory language processing in children, low literate adults, and adults with dyslexia. Poster presented at the 19th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2013), Marseille, France.
  • Lai, V. T., & Huettig, F. (2013). When anticipation meets emotion: EEG evidence for distinct processing mechanisms. Poster presented at the 19th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2013), Marseille, France.
  • Lai, V. T., & Huettig, F. (2013). When anticipation meets emotion: EEG evidence for distinct processing mechanisms. Poster presented at the Annual Meeting of the Society for the Neurobiology of Language, San Diego, USA.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Both phonological grain-size and general processing speed determine literacy related differences in language mediated eye gaze: Evidence from a connectionist model. Poster presented at the 18th Conference of the European Society for Cognitive Psychology (ESCoP 2013), Budapest, Hungary.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Semantic and visual competition eliminates the influence of rhyme overlap in spoken language processing. Poster presented at the 19th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2013), Marseille, France.
  • De Groot, F., Huettig, F., & Olivers, C. N. L. (2012). Attentional capture by working memory content: When do words guide attention? Poster presented at the 3rd Symposium on “Visual Search and Selective Attention” (VSSA III), Munich, Germany.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2012). Looking at nothing facilitates memory retrieval. Poster presented at Donders Discussions 2012, Nijmegen, The Netherlands.

    Abstract

    When processing visual objects, we integrate visual, linguistic and spatial information to form an episodic trace. Re-activating one aspect of the episodic trace of an object re-activates the entire bundle, making all integrated information available. Using the blank screen paradigm [1], researchers observed that upon processing spoken linguistic input, participants tended to make eye movements on a blank screen, fixating locations that had previously been occupied by objects mentioned in the linguistic utterance, or by related objects. Ferreira and colleagues [2] suggested that 'looking at nothing' facilitates memory retrieval. However, this claim lacks convincing empirical support. In Experiment 1, Dutch participants looked at four-object displays. Three objects were related to a spoken target word. Given the target word 'beker' (beaker), the display featured a phonological competitor (a bear), a shape competitor (a bobbin), a semantic competitor (a fork), and an unrelated distractor (an umbrella). Participants were asked to name the objects as fast as possible. Subsequently, the objects disappeared. Participants fixated the center of the screen and listened to the target word. They had to carry out a semantic judgment task (indicating in which position the object semantically related to the target had appeared) or a visual shape similarity judgment (indicating the position of the object similar in shape to the target). In both conditions, we observed that participants re-fixated the empty target location before responding. The set-up of Experiment 2 was identical except that we asked participants to maintain fixation on the center of the screen while listening to the spoken word and responding. Performance accuracy was significantly lower in Experiment 2 than in Experiment 1. The results indicate that memory retrieval for objects is impaired when participants are not allowed to look at relevant, though empty, locations.
    [1] Altmann, G. (2004). Language-mediated eye movements in the absence of a visual world: The 'blank screen paradigm'. Cognition, 93(2), B79-B87.
    [2] Ferreira, F., Apel, J., & Henderson, J. M. (2008). Taking a new look at looking at nothing. Trends in Cognitive Sciences, 12(11), 405-410.
  • Hintz, F., & Huettig, F. (2012). Phonological word-object mapping is contingent upon the nature of the visual environment. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2012), Riva del Garda, Italy.

    Abstract

    Four eye-tracking experiments investigated the impact of the nature of the visual environment on the likelihood of word-object mapping taking place at a phonological level of representation during language-mediated visual search. Dutch participants heard single spoken target words while looking at four objects embedded in displays of different complexity and were asked to indicate the presence or absence of the target object. During filler trials the target objects were present, but during experimental trials they were absent and the display contained various competitor objects. For example, given the target word 'beaker', the display contained a phonological competitor (a beaver, bever), a shape competitor (a bobbin, klos), a semantic competitor (a fork, vork), and an unrelated distractor (an umbrella, paraplu). When objects were embedded in semi-realistic scenes including four human-like characters (Experiments 1, 3, and 4a), there were no biases in looks to phonological competitors, even when the objects' contours were highlighted (Experiment 3) and an object naming task was administered right before the eye-tracking experiment (Experiment 4a). In all three experiments, however, we observed evidence for inhibition in looks to phonological competitors, which suggests that the phonological forms of the objects had been retrieved. When objects were presented in simple four-object displays (Experiments 2 and 4b), there were clear attentional biases to phonological competitors, replicating earlier research (Huettig & McQueen, 2007). These findings suggest that phonological word-object mapping is contingent upon the nature of the visual environment and add to a growing body of evidence that the nature of our visual surroundings induces particular modes of processing during language-mediated visual search.
    Reference: Huettig, F., & McQueen, J. M. (2007). The tug of war between phonological, semantic and shape information in language-mediated visual search. Journal of Memory and Language, 57(4), 460-482. doi: 10.1016/j.jml.2007.02.001
  • Huettig, F., & Janse, E. (2012). Anticipatory eye movements are modulated by working memory capacity: Evidence from older adults. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2012), Riva del Garda, Italy.
  • Mani, N., & Huettig, F. (2012). Prediction during language processing is a piece of cake – but only for skilled producers. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2012), Riva del Garda, Italy.

    Abstract

    Background

    Adults orient towards an image of a cake upon hearing sentences such as “The boy will eat the cake” even before hearing the word cake, i.e., soon after they hear the verb EAT (Kamide et al., 2003). This finding has been taken to suggest that verb processing includes prediction of nouns that qualify as arguments for these verbs. Upon hearing the verb EAT, adults and young children (three- to ten-year-olds; Borovsky et al., in press) anticipate upcoming linguistic input in keeping with this verb’s selectional restrictions and use this to orient towards images of thematically appropriate arguments.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2012). Multimodal interaction in a model of visual world phenomena. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2012), Riva del Garda, Italy.

    Abstract

    Existing computational models of the Visual World Paradigm (VWP) have simulated the connection between language processing and eye gaze behavior, and consequently have provided insight into the cognitive processes underlying lexical and sentence comprehension. Allopenna, Magnuson and Tanenhaus (1998) demonstrated that fixation probabilities during spoken word processing can be predicted by lexical activations in the TRACE model of spoken word recognition. Recent computational models have extended this work to predict fixation behavior during sentence processing from the integration of visual and linguistic information. Recent empirical investigations of word-level effects in the VWP support claims that language-mediated eye gaze is not only influenced by overlap at a phonological level (Allopenna, Magnuson & Tanenhaus, 1998) but also by relationships in terms of visual and semantic similarity. Huettig and McQueen (2007) found that when participants heard a word and viewed a scene containing objects phonologically, visually, or semantically similar to the target, all competitors exerted an effect on fixations, but fixations to phonological competitors preceded those to other competitors. Current models of the VWP that simulate the interaction between visual and linguistic information do so with representations that are unable to capture fine-grained semantic, phonological or visual feature relationships. They are therefore limited in their ability to examine effects of multimodal interactions in language processing. Our research extends that of previous models by implementing representations in each modality that are sufficiently rich to capture similarities and distinctions in visual, phonological and semantic representations. Our starting point was to determine the extent to which multimodal interactions between these modalities in the VWP would be emergent from the nature of the representations themselves, rather than determined by architectural constraints.

    We constructed a recurrent connectionist model, based on hub-and-spoke models of semantic processing, which integrates visual, phonological and semantic information within a central resource. We trained and tested the model on viewing scenes as in Huettig and McQueen’s (2007) study, and found that the model replicated the complex behaviour and time-course dynamics of multimodal interaction, such that the model activated phonological competitors prior to activating visual and semantic competitors. Our approach enables us to determine that differences in the computational properties of each modality’s representational structure are sufficient to produce behaviour consistent with the VWP. The componential nature of phonological representations and the holistic structure of visual and semantic representations result in fixations to phonological competitors preceding those to other competitors. Our findings suggest that such language-mediated visual attention phenomena can emerge from the statistics of the problem domain, with observed behaviour emerging as a natural consequence of differences in the structure of information within each modality, without requiring additional modality-specific architectural constraints.
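    As a very loose illustration of the hub-and-spoke idea described above (not the authors' implementation: all layer sizes, weight values, and names below are invented), visual, phonological, and semantic input layers can each project into a shared recurrent "hub" whose activation settles over time:

```python
# Hypothetical sketch of a hub-and-spoke style recurrent network:
# three modality-specific input layers feed one shared recurrent hub.
import numpy as np

rng = np.random.default_rng(0)
N_VIS, N_PHON, N_SEM, N_HUB = 20, 20, 20, 30

# Random weights stand in for trained ones; purely illustrative.
W_vis = rng.normal(0, 0.1, (N_HUB, N_VIS))
W_phon = rng.normal(0, 0.1, (N_HUB, N_PHON))
W_sem = rng.normal(0, 0.1, (N_HUB, N_SEM))
W_rec = rng.normal(0, 0.1, (N_HUB, N_HUB))  # recurrent hub connections

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def settle(vis, phon, sem, steps=10):
    """Let the hub integrate multimodal input over `steps` time steps."""
    hub = np.zeros(N_HUB)
    trajectory = []
    for _ in range(steps):
        net = W_vis @ vis + W_phon @ phon + W_sem @ sem + W_rec @ hub
        hub = sigmoid(net)
        trajectory.append(hub.copy())
    return np.array(trajectory)

# Example: present phonological input alone, as when a spoken word unfolds.
phon_input = rng.random(N_PHON)
traj = settle(np.zeros(N_VIS), phon_input, np.zeros(N_SEM))
print(traj.shape)  # one hub activation vector per time step
```

    Tracking hub activation over time steps is what allows such a model to show time-course effects, e.g. phonological competitors becoming active before visual and semantic ones.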
  • Brouwer, S., Mitterer, H., & Huettig, F. (2009). Listeners reconstruct reduced forms during spontaneous speech: Evidence from eye movements. Poster presented at 15th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2009), Barcelona, Spain.
  • Brouwer, S., Mitterer, H., & Huettig, F. (2009). Phonological competition during the recognition of spontaneous speech: Effects of linguistic context and spectral cues. Poster presented at 157th Meeting of the Acoustical Society of America, Portland, OR.

    Abstract

    How do listeners recognize reduced forms that occur in spontaneous speech, such as “puter” for “computer”? To address this question, eye-tracking experiments were performed in which participants heard a sentence and saw four printed words on a computer screen. The auditory stimuli contained canonical and reduced forms from a spontaneous speech corpus, presented in different amounts of linguistic context. The four printed words were a “canonical form” competitor (e.g., “companion”, phonologically similar to “computer”), a “reduced form” competitor (e.g., “pupil”, phonologically similar to “puter”), and two unrelated distractors. The results showed, first, that reduction inhibits word recognition overall. Second, listeners looked more often to the “reduced form” competitor than to the “canonical form” competitor when reduced forms were presented in isolation or in a phonetic context. In full context, however, both competitors attracted looks: an early rise of the “reduced form” competitor and a late rise of the “canonical form” competitor. This “late rise” of the “canonical form” competitor was not observed when we replaced the original /p/ from “puter” with a real onset /p/. This indicates that phonetic detail and semantic/syntactic context are necessary for the recognition of reduced forms.