Falk Huettig

Presentations

  • Garrido Rodriguez, G., Huettig, F., Norcliffe, E., Brown, P., & Levinson, S. C. (2017). Participant assignment to thematic roles in Tzeltal: Eye tracking evidence from sentence comprehension in a verb-initial language. Poster presented at the workshop 'Event Representations in Brain, Language & Development' (EvRep), Nijmegen, The Netherlands.
  • Ostarek, M., Van Paridon, J., & Huettig, F. (2017). Conceptual processing of up/down words (cloud/grass) recruits cortical oculomotor areas central for planning and executing saccadic eye movements. Talk presented at the 10th Embodied and Situated Language Processing Conference. Moscow, Russia. 2017-09-10 - 2017-09-12.
  • Ostarek, M., & Huettig, F. (2017). Grounding language in vision [Invited talk]. Talk presented at the University of California Davis. Davis, CA, USA.
  • Ostarek, M., Van Paridon, J., Evans, S., & Huettig, F. (2017). Processing of up/down words recruits the cortical oculomotor network. Poster presented at the 24th Annual Meeting of the Cognitive Neuroscience Society, San Francisco, CA, USA.
  • Huettig, F. (2013). Anticipatory eye movements and predictive language processing. Talk presented at the ZiF research group on "Competition and Priority Control in Mind and Brain". Bielefeld, Germany. 2013-07.
  • Huettig, F., Mani, N., Mishra, R. K., & Brouwer, S. (2013). Literacy as a proxy for experience: Reading ability predicts anticipatory language processing in children, low literate adults, and adults with dyslexia. Poster presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2013), Marseille, France.
  • Huettig, F., Mishra, R. K., Kumar, U., Singh, J. P., Guleria, A., & Tripathi, V. (2013). Phonemic and syllabic awareness of adult literates and illiterates in an Indian alphasyllabic language. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2013]. Vienna, Austria. 2013-03-24 - 2013-03-27.
  • Huettig, F., Mani, N., Mishra, R. K., & Brouwer, S. (2013). Reading ability predicts anticipatory language processing in children, low literate adults, and adults with dyslexia. Talk presented at the 54th Annual Meeting of the Psychonomic Society. Toronto, Canada. 2013-11-14 - 2013-11-17.

  • Huettig, F., Mani, N., Mishra, R. K., & Brouwer, S. (2013). Reading ability predicts anticipatory language processing in children, low literate adults, and adults with dyslexia. Talk presented at the 11th International Symposium of Psycholinguistics. Tenerife, Spain. 2013-03-20 - 2013-03-23.

  • Janse, E., Huettig, F., & Jesse, A. (2013). Working memory modulates the immediate use of context for recognizing words in sentences. Talk presented at the 5th Workshop on Speech in Noise: Intelligibility and Quality. Vitoria, Spain. 2013-01-10 - 2013-01-11.
  • Lai, V. T., & Huettig, F. (2013). When anticipation meets emotion: EEG evidence for distinct processing mechanisms. Poster presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2013), Marseille, France.
  • Lai, V. T., & Huettig, F. (2013). When anticipation meets emotion: EEG evidence for distinct processing mechanisms. Poster presented at the Annual Meeting of the Society for the Neurobiology of Language, San Diego, US.
  • Mani, N., & Huettig, F. (2013). Reading ability predicts anticipatory language processing in 8 year olds. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2013]. Vienna, Austria. 2013-03-24 - 2013-03-27.
  • Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2013). Anticipating references to objects during sentence comprehension. Talk presented at the Experimental Psychology Society meeting (EPS). Bangor, UK. 2013-07-03 - 2013-07-05.
  • Rommers, J., Meyer, A. S., Piai, V., & Huettig, F. (2013). Constraining the involvement of language production in comprehension: A comparison of object naming and object viewing in sentence context. Talk presented at the 19th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2013]. Marseille, France. 2013-09-02 - 2013-09-04.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Both phonological grain-size and general processing speed determine literacy related differences in language mediated eye gaze: Evidence from a connectionist model. Poster presented at The 18th Conference of the European Society for Cognitive Psychology [ESCOP 2013], Budapest, Hungary.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Semantic and visual competition eliminates the influence of rhyme overlap in spoken language processing. Poster presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2013], Marseille, France.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Modelling the effect of literacy on multimodal interactions during spoken language processing in the visual world. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2013]. Vienna, Austria. 2013-03-24 - 2013-03-27.

    Abstract

    Recent empirical evidence suggests that language-mediated eye gaze around the visual world varies across individuals and is partly determined by their level of formal literacy training. Huettig, Singh & Mishra (2011) showed that unlike high-literate individuals, whose eye gaze was closely time-locked to phonological overlap between a spoken target word and items presented in a visual display, low-literate individuals' eye gaze was not tightly locked to phonological overlap in the speech signal but instead strongly influenced by semantic relationships between items. Our present study tests the hypothesis that this behaviour is an emergent property of an increased ability to extract phonological structure from the speech signal, as in the case of high-literates, with low-literates more reliant on syllabic structure. This hypothesis was tested using an emergent connectionist model, based on the hub-and-spoke models of semantic processing (Dilkina et al., 2008), that integrates linguistic information extracted from the speech signal with visual and semantic information within a central resource. We demonstrate that contrasts in fixation behaviour similar to those observed between high and low literates emerge when the model is trained on either a speech signal segmented by phoneme (i.e. high-literates) or by syllable (i.e. low-literates).
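    The hub-and-spoke integration described in the abstract can be sketched in a few lines of code. Everything below is a hypothetical toy illustration, not the actual model: the layer sizes, the random weights, and the sparse (phoneme-like) versus dense (syllable-like) input patterns are all assumptions made only to show the architecture, in which three input "spokes" (phonological, visual, semantic) converge on one shared hub layer.

    ```python
    import numpy as np

    # Toy sketch of a hub-and-spoke style network (cf. Dilkina et al., 2008):
    # three input "spokes" project onto a shared "hub" layer, which drives
    # fixation-relevant output units (here: 4 display positions). All sizes
    # and weights are illustrative assumptions, not the model's parameters.
    rng = np.random.default_rng(0)

    N_PHON, N_VIS, N_SEM = 20, 16, 24   # spoke sizes (hypothetical)
    N_HUB, N_OUT = 32, 4                # hub units; 4 positions in the display

    W_phon = rng.normal(0, 0.1, (N_PHON, N_HUB))
    W_vis  = rng.normal(0, 0.1, (N_VIS, N_HUB))
    W_sem  = rng.normal(0, 0.1, (N_SEM, N_HUB))
    W_out  = rng.normal(0, 0.1, (N_HUB, N_OUT))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(phon, vis, sem):
        """Integrate the three spokes in the central hub; return fixation activations."""
        hub = sigmoid(phon @ W_phon + vis @ W_vis + sem @ W_sem)
        return sigmoid(hub @ W_out)

    # Phoneme-segmented speech would activate fine-grained phonological units
    # one segment at a time; syllable-segmented speech activates coarser chunks.
    # Here we simply contrast a sparse and a distributed input pattern.
    phoneme_input  = np.zeros(N_PHON); phoneme_input[0] = 1.0
    syllable_input = np.zeros(N_PHON); syllable_input[:5] = 0.2

    vis = rng.random(N_VIS)
    sem = rng.random(N_SEM)

    act_phoneme  = forward(phoneme_input, vis, sem)
    act_syllable = forward(syllable_input, vis, sem)
    print(act_phoneme.shape, act_syllable.shape)  # (4,) (4,)
    ```

    In the study itself the model is trained, so the contrasting fixation patterns emerge from learning on differently segmented speech; this untrained forward pass only shows where the two input codes enter and how they are merged in the hub.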
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Phonological grain size and general processing speed modulates language mediated visual attention – Evidence from a connectionist model. Talk presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2013]. Marseille, France. 2013-09-02 - 2013-09-04.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Putting rhyme in context: Visual and semantic competition eliminates phonological rhyme effects in language-mediated eye gaze. Talk presented at The 18th Conference of the European Society for Cognitive Psychology [ESCOP 2013]. Budapest, Hungary. 2013-08-29 - 2013-09-01.
  • Brouwer, S., Mitterer, H., & Huettig, F. (2009). Listeners reconstruct reduced forms during spontaneous speech: Evidence from eye movements. Poster presented at 15th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2009), Barcelona, Spain.
  • Brouwer, S., Mitterer, H., & Huettig, F. (2009). Phonological competition during the recognition of spontaneous speech: Effects of linguistic context and spectral cues. Poster presented at 157th Meeting of the Acoustical Society of America, Portland, OR.

    Abstract

    How do listeners recognize reduced forms that occur in spontaneous speech, such as “puter” for “computer”? To address this question, eye-tracking experiments were performed in which participants heard a sentence and saw four printed words on a computer screen. The auditory stimuli contained canonical and reduced forms from a spontaneous speech corpus in different amounts of linguistic context. The four printed words were a “canonical form” competitor (e.g., “companion”, phonologically similar to “computer”), a “reduced form” competitor (e.g., “pupil”, phonologically similar to “puter”), and two unrelated distractors. The results showed, first, that reduction inhibits word recognition overall. Second, listeners look more often to the “reduced form” competitor than to the “canonical form” competitor when reduced forms are presented in isolation or in a phonetic context. In full context, however, both competitors attracted looks: an early rise of the “reduced form” competitor and a late rise of the “canonical form” competitor. This “late rise” of the “canonical form” competitor was not observed when we replaced the original /p/ from “puter” with a real onset /p/. This indicates that phonetic detail and semantic/syntactic context are necessary for the recognition of reduced forms.
  • Huettig, F., & McQueen, J. M. (2009). AM radio noise changes the dynamics of spoken word recognition. Talk presented at 15th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2009). Barcelona, Spain. 2009-09-09.

    Abstract

    Language processing does not take place in isolation from the sensory environment. Listeners are able to recognize spoken words in many different situations, ranging from carefully articulated and noise-free laboratory speech, through casual conversational speech in a quiet room, to degraded conversational speech in a busy train station. For listeners to be able to recognize speech optimally in each of these listening situations, they must be able to adapt to the constraints of each situation. We investigated this flexibility by comparing the dynamics of the spoken-word recognition process in clear speech and speech disrupted by radio noise. In Experiment 1, Dutch participants listened to clearly articulated spoken Dutch sentences which each included a critical word while their eye movements to four visual objects presented on a computer screen were measured. There were two critical conditions. In the first, the objects included a cohort competitor (e.g., parachute, “parachute”) with the same onset as the critical spoken word (e.g., paraplu, “umbrella”) and three unrelated distractors. In the second condition, a rhyme competitor (e.g., hamer, “hammer”) of the critical word (e.g., kamer, “room”) was present in the display, again with three distractors. To maximize competitor effects, pictures of the critical words themselves were not present in the displays on the experimental trials (e.g., there was no umbrella in the display with the 'paraplu' sentence) and a passive listening task was used (Huettig & McQueen, 2007). Experiment 2 was identical to Experiment 1 except that phonemes in the spoken sentences were replaced with radio-signal noises (as in AM radio listening conditions). In each sentence, two, three, or four phonemes were replaced with noises. The sentential position of these replacements was unpredictable, but the adjustments were always made to onset phonemes. The critical words (and the immediately surrounding words) were not changed.
The question was whether listeners could learn that, under these circumstances, onset information is less reliable. We predicted that participants would look less at the cohort competitors (the initial match to the competitor is less good) and more at the rhyme competitors (the initial mismatch is less bad). We observed a significant experiment-by-competitor-type interaction. In Experiment 1 participants fixated both kinds of competitors more than unrelated distractors, but there were more and earlier looks to cohort competitors than to rhyme competitors (Allopenna et al., 1998). In Experiment 2 participants still fixated cohort competitors more than rhyme competitors, but the early cohort effect was reduced and the rhyme effect was stronger and occurred earlier. These results suggest that AM radio noise changes the dynamics of spoken word recognition. The well-attested finding of stronger reliance on word onset overlap in speech recognition appears to be due in part to the use of clear speech in most experiments. When onset information becomes less reliable, listeners appear to depend on it less. A core feature of the speech-recognition system thus appears to be its flexibility. Listeners are able to adjust the perceptual weight they assign to different parts of incoming spoken language.
  • Huettig, F. (2009). Language-mediated visual search [Invited talk]. Talk presented at VU Amsterdam. Amsterdam, The Netherlands.
  • Huettig, F. (2009). On the use of distributional models of semantic space to investigate human cognition. Talk presented at Distributional Semantics beyond Concrete Concepts (workshop at the Annual Meeting of the Cognitive Science Society, CogSci 2009). Amsterdam, The Netherlands. 2009-07-29 - 2009-08-01.
  • Huettig, F. (2009). The role of colour during language-vision interactions. Talk presented at International Conference on Language-Cognition Interface 2009. Allahabad, India. 2009-12-06 - 2009-12-09.