Falk Huettig

Presentations

  • Favier, S., Meyer, A. S., & Huettig, F. (2018). How does literacy influence syntactic processing in spoken language? Talk presented at Psycholinguistics in Flanders (PiF 2018). Gent, Belgium. 2018-06-04 - 2018-06-05.
  • Garrido Rodriguez, G., Huettig, F., Norcliffe, E., Brown, P., & Levinson, S. C. (2018). Participant assignment to thematic roles in Tzeltal: Eye tracking evidence from sentence comprehension in a verb-initial language. Talk presented at Architectures and Mechanisms for Language Processing (AMLaP 2018). Berlin, Germany. 2018-09-06 - 2018-09-08.
  • Huettig, F. (2018). How learning to read changes mind and brain [keynote]. Talk presented at Architectures and Mechanisms for Language Processing-Asia (AMLaP-Asia 2018). Telangana, India. 2018-02-01 - 2018-02-03.
  • Eisner, F., Kumar, U., Mishra, R. K., Nand Tripathi, V., Guleria, A., Singh, P., & Huettig, F. (2015). The effect of literacy acquisition on cortical and subcortical networks: A longitudinal approach. Talk presented at the 7th Annual Meeting of the Society for the Neurobiology of Language. Chicago, US. 2015-10-15 - 2015-10-17.

    Abstract

    How do human cultural inventions such as reading result in neural re-organization? Previous cross-sectional studies have reported extensive effects of literacy on the neural systems for vision and language (Dehaene et al [2010, Science], Castro-Caldas et al [1998, Brain], Petersson et al [1998, NeuroImage], Carreiras et al [2009, Nature]). In this first longitudinal study with completely illiterate participants, we measured brain responses to speech, text, and other categories of visual stimuli with fMRI before and after a group of illiterate participants in India completed a literacy training program in which they learned to read and write Devanagari script. A literate and an illiterate no-training control group were matched to the training group in terms of socioeconomic background and were recruited from the same societal community in two villages of a rural area near Lucknow, India. This design permitted investigating effects of literacy cross-sectionally across groups before training (N=86) as well as longitudinally (training group N=25). The two analysis approaches yielded converging results: Literacy was associated with enhanced, mainly left-lateralized responses to written text along the ventral stream (including lingual gyrus, fusiform gyrus, and parahippocampal gyrus), dorsal stream (intraparietal sulcus), and (pre-) motor systems (pre-central sulcus, supplementary motor area), thalamus (pulvinar), and cerebellum. Significantly reduced responses were observed bilaterally in the superior parietal lobe (precuneus) and in the right angular gyrus. These positive effects corroborate and extend previous findings from cross-sectional studies. However, effects of literacy were specific to written text and (to a lesser extent) to false fonts. Contrary to previous research, we found no direct evidence of literacy affecting the processing of other types of visual stimuli such as faces, tools, houses, and checkerboards. 
Furthermore, unlike in some previous studies, we did not find any evidence for effects of literacy on responses in the auditory cortex in our Hindi-speaking participants. We conclude that learning to read has a specific and extensive effect on the processing of written text along the visual pathways, including low-level thalamic nuclei, high-level systems in the intraparietal sulcus and the fusiform gyrus, and motor areas. The absence of an effect of literacy on responses in the auditory cortex in particular raises questions about the extent to which phonological representations in the auditory cortex are altered by literacy acquisition or recruited online during reading.
  • de Groot, F., Huettig, F., & Olivers, C. N. (2015). Semantic influences on visual attention. Talk presented at the 15th NVP Winter Conference. Egmond aan Zee, The Netherlands. 2015-12-17 - 2015-12-19.

    Abstract

    To what extent is visual attention driven by the semantics of individual objects, rather than by their visual appearance? To investigate this we continuously measured eye movements while observers searched through displays of common objects for an aurally instructed target. On crucial trials, the target was absent, but the display contained objects that were either semantically or visually related to the target. We hypothesized that timing is crucial in the occurrence and strength of semantic influences on visual orienting, and therefore presented the target instruction either before, during, or after (memory-based search) picture onset. When the target instruction was presented before picture onset we found a substantial, but delayed, bias in orienting towards semantically related objects as compared to visually related objects. However, this delay disappeared when the visual information was presented before the target instruction. Furthermore, the temporal dynamics of the semantic bias did not change in the absence of visual competition. These results point to cascaded but independent influences of semantic and visual representations on attention. In addition, the results of the memory-based search studies suggest that visual and semantic biases only arise when the visual stimuli are present. Although we consistently found that people fixate at locations previously occupied by the target object (a replication of earlier findings), we did not find such biases for visually or semantically related objects. Overall, our studies show that the question whether visual orienting is driven by semantic content is better rephrased as when visual orienting is driven by semantic content.
  • de Groot, F., Huettig, F., & Olivers, C. (2015). When meaning matters: The temporal dynamics of semantic influences on visual attention. Talk presented at the 23rd Annual Workshop on Object Perception, Attention, and Memory. Chicago, USA. 2015-10-19.
  • Hintz, F., Meyer, A. S., & Huettig, F. (2015). Context-dependent employment of mechanisms in anticipatory language processing. Talk presented at the 15th NVP Winter Conference. Egmond aan Zee, The Netherlands. 2015-12-17 - 2015-12-19.
  • Huettig, F. (2015). Cause or effect? What commonalities between illiterates and individuals with dyslexia can tell us about dyslexia. Talk presented at the Reading in the Forest workshop. Annweiler, Germany. 2015-10-26 - 2015-10-28.

    Abstract

    I will discuss recent research with illiterates and individuals with dyslexia which suggests that many cognitive 'deficiencies' proposed as possible causes of dyslexia are simply a consequence of decreased reading experience. I will argue that in order to make further progress towards an understanding of the causes of dyslexia it is necessary to appropriately distinguish between cause and effect.
  • Huettig, F. (2015). Effects of literacy on cognition [Effekte der Literalität auf die Kognition]. Talk presented at the closing conference of the collaborative project Alpha plus Job. Bamberg, Germany. 2015-01.
  • Huettig, F. (2015). The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants. Talk presented at the Individual differences in language processing across the adult life span workshop. Nijmegen, The Netherlands. 2015-12-10 - 2015-12-11.
  • Huettig, F. (2015). The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants. Talk presented at the Psychology Department, University of York. York, UK. 2015-11.
  • Huettig, F. (2015). The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants. Talk presented at the Psychology Department, University of Leeds. Leeds, UK. 2015-11.
  • Huettig, F. (2015). The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants. Talk presented at the Psychology Department, University of Glasgow. Glasgow, Scotland. 2015-11.
  • Huettig, F. (2015). The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants. Talk presented at the Psychology Department, University of Edinburgh. Edinburgh, Scotland. 2015-09.
  • Huettig, F., Kumar, U., Mishra, R. K., Tripathi, V., Guleria, A., Prakash Singh, J., & Eisner, F. (2015). The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants. Talk presented at the 21st Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2015). Valletta, Malta. 2015-09-03 - 2015-09-05.

    Abstract

    How do human cultural inventions such as reading result in neural re-organization? In this first longitudinal study with young completely illiterate adult participants, we measured brain responses to speech, text, and other categories of visual stimuli with fMRI before and after a group of illiterate participants in India completed a literacy training program in which they learned to read and write Devanagari script. A literate and an illiterate no-training control group were matched to the training group in terms of socioeconomic background and were recruited from the same societal community in two villages of a rural area near Lucknow, India. This design permitted investigating effects of literacy cross-sectionally across groups before training (N=86) as well as longitudinally (training group N=25). The two analysis approaches yielded converging results: Literacy was associated with enhanced, left-lateralized responses to written text along the ventral stream (including lingual gyrus, fusiform gyrus, and parahippocampal gyrus), dorsal stream (intraparietal sulcus), and (pre-) motor systems (pre-central sulcus, supplementary motor area) and thalamus (pulvinar). Significantly reduced responses were observed bilaterally in the superior parietal lobe (precuneus) and in the right angular gyrus. These effects corroborate and extend previous findings from cross-sectional studies. However, effects of literacy were specific to written text and (to a lesser extent) to false fonts. We did not find any evidence for effects of literacy on responses in the auditory cortex in our Hindi-speaking participants. This raises questions about the extent to which phonological representations are altered by literacy acquisition.
  • Huettig, F., Kumar, U., Mishra, R. K., Tripathi, V., Guleria, A., Prakash Singh, J., & Eisner, F. (2015). The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants. Talk presented at the 19th Meeting of the European Society for Cognitive Psychology (ESCoP 2015). Paphos, Cyprus. 2015-09-17 - 2015-09-20.

    Abstract

    How do human cultural inventions such as reading result in neural re-organization? In this first longitudinal study with young completely illiterate adult participants, we measured brain responses to speech, text, and other categories of visual stimuli with fMRI before and after a group of illiterate participants in India completed a literacy training program in which they learned to read and write Devanagari script. A literate and an illiterate no-training control group were matched to the training group in terms of socioeconomic background and were recruited from the same societal community in two villages of a rural area near Lucknow, India. This design permitted investigating effects of literacy cross-sectionally across groups before training (N=86) as well as longitudinally (training group N=25). The two analysis approaches yielded converging results: Literacy was associated with enhanced, left-lateralized responses to written text along the ventral stream (including lingual gyrus, fusiform gyrus, and parahippocampal gyrus), dorsal stream (intraparietal sulcus), and (pre-) motor systems (pre-central sulcus, supplementary motor area) and thalamus (pulvinar). Significantly reduced responses were observed bilaterally in the superior parietal lobe (precuneus) and in the right angular gyrus. These effects corroborate and extend previous findings from cross-sectional studies. However, effects of literacy were specific to written text and (to a lesser extent) to false fonts. We did not find any evidence for effects of literacy on responses in the auditory cortex in our Hindi-speaking participants. This raises questions about the extent to which phonological representations are altered by literacy acquisition.
  • Mani, N., Daum, M., & Huettig, F. (2015). “Pro-active” in Many Ways: Evidence for Multiple Mechanisms in Prediction. Talk presented at the Biennial Meeting of the Society for Research in Child Development (SRCD 2015). Philadelphia, Pennsylvania, USA. 2015-03-19 - 2015-03-21.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2016). The effects of orthographic transparency on the reading system: Insights from a computational model of reading development. Talk presented at the Experimental Psychology Society, London Meeting. London, U.K. 2016-01-06 - 2016-01-08.
  • Hintz, F., & Huettig, F. (2012). Phonological word-object mapping is contingent upon the nature of the visual environment. Talk presented at Psycholinguistics in Flanders goes Dutch [PiF 2012]. Berg en Dal, The Netherlands. 2012-06-06 - 2012-06-07.
  • Huettig, F., Singh, N., Singh, S., & Mishra, R. K. (2012). Language-mediated prediction is related to reading ability and formal literacy. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2012]. Mannheim, Germany. 2012-04-04 - 2012-04-06.
  • Huettig, F. (2012). Literacy modulates language-mediated visual attention and prediction. Talk presented at the Center of Excellence Cognitive Interaction Technology (CITEC). Bielefeld, Germany. 2012-01-12.
  • Huettig, F. (2012). The nature and mechanisms of language-mediated anticipatory eye movements. Talk presented at the International symposium: The Attentive Listener in the Visual world: The Interaction of Language, Attention,Memory, and Vision. Allahabad, India. 2012-10-05 - 2012-10-06.
  • Mani, N., & Huettig, F. (2012). Toddlers anticipate that we EAT cake. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2012]. Mannheim, Germany. 2012-04-04 - 2012-04-06.
  • Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2012). Object shape representations in the contents of predictions for upcoming words. Talk presented at Psycholinguistics in Flanders [PiF 2012]. Berg en Dal, The Netherlands. 2012-06-06 - 2012-06-07.
  • Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2012). The content of predictions: Involvement of object shape representations in the anticipation of upcoming words. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2012]. Mannheim, Germany. 2012-04-04 - 2012-04-06.
  • Rommers, J., Meyer, A. S., & Huettig, F. (2012). Predicting upcoming meaning involves specific contents and domain-general mechanisms. Talk presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2012]. Riva del Garda, Italy. 2012-09-06 - 2012-09-08.

    Abstract

    In sentence comprehension, readers and listeners often anticipate upcoming information (e.g., Altmann & Kamide, 1999). We investigated two aspects of this process, namely 1) what is pre-activated when anticipating an upcoming word (the contents of predictions), and 2) which cognitive mechanisms are involved. The contents of predictions at the level of meaning could be restricted to functional semantic attributes (e.g., edibility; Altmann & Kamide, 1999). However, when words are processed other types of information can also be activated, such as object shape representations. It is unknown whether this type of information is already activated when upcoming words are predicted. Forty-five adult participants listened to predictable words in sentence contexts (e.g., "In 1969 Neil Armstrong was the first man to set foot on the moon.") while looking at visual displays of four objects. Their eye movements were recorded. There were three conditions: target present (e.g., a moon and three distractor objects that were unrelated to the predictable word in terms of semantics, shape, and phonology), shape competitor (e.g., a tomato and three unrelated distractors), and distractors only (e.g., rice and three other unrelated objects). Across lists, the same pictures and sentences were used in the different conditions. We found that participants already showed a significant bias for the target object (moon) over unrelated distractors several seconds before the target was mentioned, demonstrating that they were predicting. Importantly, there was also a smaller but significant shape competitor (tomato) preference starting at about a second before critical word onset, consistent with predictions involving the referent’s shape. The mechanisms of predictions could be specific to language tasks, or language could use processing principles that are also used in other domains of cognition. 
We investigated whether performance in non-linguistic prediction is related to prediction in language processing, taking an individual differences approach. In addition to the language processing task, the participants performed a simple cueing task (after Posner, Nissen, & Ogden, 1978). They pressed one of two buttons (left/right) to indicate the location of an X symbol on the screen. On half of the trials, the X was preceded by a neutral cue (+). On the other half, an arrow cue pointing left (<) or right (>) indicated the upcoming X's location with 80% validity (i.e., the arrow cue was correct 80% of the time). The SOA between cue and target was 500 ms. Prediction was quantified as the mean response latency difference between the neutral and valid condition. This measure correlated positively with individual participants' anticipatory target and shape competitor preference (r = .27; r = .45), and was a significant predictor of anticipatory looks in linear mixed-effects regression models of the data. Participants who showed more facilitation from the arrow cues predicted to a higher degree in the linguistic task. This suggests that prediction in language processing may use mechanisms that are also used in other domains of cognition.

    References

    Altmann, G. T. M., & Kamide, Y. (1999). Incremental interpretation at verbs: Restricting the domain of subsequent reference. Cognition, 73(3), 247-264.
    Posner, M. I., Nissen, M. J., & Ogden, W. C. (1978). Attended and unattended processing modes: The role of set for spatial location. In H. L. Pick & I. J. Saltzman (Eds.), Modes of perceiving and processing information. Hillsdale, NJ: Lawrence Erlbaum Associates.
  • Smith, A. C., Huettig, F., & Monaghan, P. (2012). Modelling multimodal interaction in language mediated eye gaze. Talk presented at the 13th Neural Computation and Psychology Workshop [NCPW13]. San Sebastian, Spain. 2012-07-12 - 2012-07-14.

    Abstract

    Hub-and-spoke models of semantic processing which integrate modality specific information within a central resource have proven successful in capturing a range of neuropsychological phenomena (Rogers et al, 2004; Dilkina et al, 2008). Within our study we investigate whether the scope of the Hub-and-spoke architectural framework can be extended to capture behavioural phenomena in other areas of cognition. The visual world paradigm (VWP) has contributed significantly to our understanding of the information and processes involved in spoken word recognition. In particular it has highlighted the importance of non-linguistic influences during language processing, indicating that combined information from vision, phonology, and semantics is evident in performance on such tasks (see Huettig, Rommers & Meyer, 2011). Huettig & McQueen (2007) demonstrated that participants’ fixations to objects presented within a single visual display varied systematically according to their phonological, semantic and visual relationship to a spoken target word. The authors argue that only an explanation allowing for influence from all three knowledge types is capable of accounting for the observed behaviour. To date computational models of the VWP (Allopenna et al, 1998; Mayberry et al, 2009; Kukona et al, 2011) have focused largely on linguistic aspects of the task and have therefore been unable to offer explanations for the growing body of experimental evidence emphasising the influence of non-linguistic information on spoken word recognition. Our study demonstrates that an emergent connectionist model, based on the Hub-and-spoke models of semantic processing, which integrates visual, phonological and functional information within a central resource, is able to capture the intricate time course dynamics of eye fixation behaviour reported in Huettig & McQueen (2007). 
Our findings indicate that such language mediated visual attention phenomena can emerge largely due to the statistics of the problem domain and may not require additional domain specific processing constraints.
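The hub-and-spoke architecture described above can be pictured as a network in which several modality-specific "spoke" layers converge on one shared "hub" layer. The sketch below is a purely illustrative, untrained toy under assumed layer sizes and random weights; it shows the connectivity pattern only, not the model reported in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hub-and-spoke network: three modality-specific spoke layers
# (visual, phonological, functional/semantic) project into a shared
# central hub, which drives an output layer that could be read as the
# relative activation of objects in a display. All sizes are arbitrary.
N_VIS, N_PHON, N_FUNC, N_HUB, N_OUT = 8, 8, 8, 16, 4

W_vis = rng.normal(size=(N_VIS, N_HUB))
W_phon = rng.normal(size=(N_PHON, N_HUB))
W_func = rng.normal(size=(N_FUNC, N_HUB))
W_out = rng.normal(size=(N_HUB, N_OUT))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(visual, phonological, functional):
    # The hub integrates all three modalities into one representation.
    hub = sigmoid(visual @ W_vis + phonological @ W_phon + functional @ W_func)
    return sigmoid(hub @ W_out)

activation = forward(rng.normal(size=N_VIS),
                     rng.normal(size=N_PHON),
                     rng.normal(size=N_FUNC))
print(activation.shape)
```

The design point the abstract makes is architectural: because all modalities pass through one shared resource, interactions between vision, phonology, and semantics can emerge from the statistics of training rather than from modality-specific processing constraints.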
  • Smith, A. C., Huettig, F., & Monaghan, P. (2012). The Tug of War during spoken word recognition in our visual worlds. Talk presented at Psycholinguistics in Flanders 2012 [PiF 2012]. Berg en Dal, The Netherlands. 2012-06-06 - 2012-06-07.
  • Huettig, F., Singh, N., & Mishra, R. (2010). Language-mediated prediction is contingent upon formal literacy. Talk presented at Brain, Speech and Orthography Workshop. Brussels, Belgium. 2010-10-15 - 2010-10-16.

    Abstract

    A wealth of research has demonstrated that prediction is a core feature of human information processing. Much less is known, however, about the nature and the extent of predictive processing abilities. Here we investigated whether high levels of language expertise attained through formal literacy are related to anticipatory language-mediated visual orienting. Indian low and high literates listened to simple spoken sentences containing a target word (e.g., "door") while at the same time looking at a visual display of four objects (a target, i.e. the door, and three distractors). The spoken sentences were constructed to encourage anticipatory eye movements to visual target objects. High literates started to shift their eye gaze to the target object well before target word onset. In the low literacy group this shift of eye gaze occurred more than a second later, well after the onset of the target. Our findings suggest that formal literacy is crucial for the fine-tuning of language-mediated anticipatory mechanisms, abilities which proficient language users can then exploit for other cognitive activities such as language-mediated visual orienting.
  • Huettig, F. (2010). Looking, language, and memory. Talk presented at Language, Cognition, and Emotion Workshop. Delhi, India. 2010-12-06 - 2010-12-06.
  • Huettig, F. (2010). Toddlers’ language-mediated visual search: They need not have the words for it. Talk presented at International Conference on Cognitive Development 2010. Allahabad, India. 2010-12-10 - 2010-12-13.

    Abstract

    Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., “strawberry”), listeners tend to fixate a colour-matched distractor (e.g., a red plane) more than a colour-mismatched distractor (e.g., a yellow plane). We ask whether these shifts in visual attention are mediated by the retrieval of lexically stored colour labels. Do children who do not yet possess verbal labels for the colour attribute that spoken and viewed objects have in common exhibit language-mediated eye movements like those made by older children and adults? That is, do toddlers look at a red plane when hearing “strawberry”? We observed that 24-month-olds lacking colour-term knowledge nonetheless recognised the perceptual-conceptual commonality between named and seen objects. This indicates that language-mediated visual search need not depend on stored labels for concepts.
  • Huettig, F., & McQueen, J. M. (2009). AM radio noise changes the dynamics of spoken word recognition. Talk presented at 15th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2009). Barcelona, Spain. 2009-09-09.

    Abstract

    Language processing does not take place in isolation from the sensory environment. Listeners are able to recognise spoken words in many different situations, ranging from carefully articulated and noise-free laboratory speech, through casual conversational speech in a quiet room, to degraded conversational speech in a busy train-station. For listeners to be able to recognize speech optimally in each of these listening situations, they must be able to adapt to the constraints of each situation. We investigated this flexibility by comparing the dynamics of the spoken-word recognition process in clear speech and speech disrupted by radio noise. In Experiment 1, Dutch participants listened to clearly articulated spoken Dutch sentences which each included a critical word while their eye movements to four visual objects presented on a computer screen were measured. There were two critical conditions. In the first, the objects included a cohort competitor (e.g., parachute, “parachute”) with the same onset as the critical spoken word (e.g., paraplu, “umbrella”) and three unrelated distractors. In the second condition, a rhyme competitor (e.g., hamer, “hammer”) of the critical word (e.g., kamer, “room”) was present in the display, again with three distractors. To maximize competitor effects, pictures of the critical words themselves were not present in the displays on the experimental trials (e.g., there was no umbrella in the display with the 'paraplu' sentence) and a passive listening task was used (Huettig & McQueen, 2007). Experiment 2 was identical to Experiment 1 except that phonemes in the spoken sentences were replaced with radio-signal noises (as in AM radio listening conditions). In each sentence, two, three, or four phonemes were replaced with noises. The sentential position of these replacements was unpredictable, but the adjustments were always made to onset phonemes. The critical words (and the immediately surrounding words) were not changed.
The question was whether listeners could learn that, under these circumstances, onset information is less reliable. We predicted that participants would look less at the cohort competitors (the initial match to the competitor is less good) and more at the rhyme competitors (the initial mismatch is less bad). We observed a significant experiment by competitor type interaction. In Experiment 1 participants fixated both kinds of competitors more than unrelated distractors, but there were more and earlier looks to cohort competitors than to rhyme competitors (Allopenna et al., 1998). In Experiment 2 participants still fixated cohort competitors more than rhyme competitors, but the early cohort effect was reduced and the rhyme effect was stronger and occurred earlier. These results suggest that AM radio noise changes the dynamics of spoken word recognition. The well-attested finding of stronger reliance on word onset overlap in speech recognition appears to be due in part to the use of clear speech in most experiments. When onset information becomes less reliable, listeners appear to depend on it less. A core feature of the speech-recognition system thus appears to be its flexibility. Listeners are able to adjust the perceptual weight they assign to different parts of incoming spoken language.
  • Huettig, F. (2009). Language-mediated visual search. Invited talk presented at VU Amsterdam. Amsterdam, The Netherlands.
  • Huettig, F. (2009). On the use of distributional models of semantic space to investigate human cognition. Talk presented at Distributional Semantics beyond Concrete Concepts (Workshop at the Annual Meeting of the Cognitive Science Society, CogSci 2009). Amsterdam, The Netherlands. 2009-07-29 - 2009-08-01.
  • Huettig, F. (2009). The role of colour during language-vision interactions. Talk presented at International Conference on Language-Cognition Interface 2009. Allahabad, India. 2009-12-06 - 2009-12-09.
