Falk Huettig

Presentations

  • Favier, S., Meyer, A. S., & Huettig, F. (2018). How does literacy influence syntactic processing in spoken language? Talk presented at Psycholinguistics in Flanders (PiF 2018). Gent, Belgium. 2018-06-04 - 2018-06-05.
  • Garrido Rodriguez, G., Huettig, F., Norcliffe, E., Brown, P., & Levinson, S. C. (2018). Participant assignment to thematic roles in Tzeltal: Eye tracking evidence from sentence comprehension in a verb-initial language. Talk presented at Architectures and Mechanisms for Language Processing (AMLaP 2018). Berlin, Germany. 2018-09-06 - 2018-09-08.
  • Huettig, F. (2018). How learning to read changes mind and brain [keynote]. Talk presented at Architectures and Mechanisms for Language Processing-Asia (AMLaP-Asia 2018). Telangana, India. 2018-02-01 - 2018-02-03.
  • Ostarek, M., Van Paridon, J., & Huettig, F. (2017). Conceptual processing of up/down words (cloud/grass) recruits cortical oculomotor areas central for planning and executing saccadic eye movements. Talk presented at the 10th Embodied and Situated Language Processing Conference. Moscow, Russia. 2017-09-10 - 2017-09-12.
  • Ostarek, M., & Huettig, F. (2017). Grounding language in vision [Invited talk]. Talk presented at the University of California Davis. Davis, CA, USA.
  • Araújo, S., Huettig, F., & Meyer, A. S. (2016). What's the nature of the deficit underlying impaired naming? An eye-tracking study with dyslexic readers. Talk presented at IWORDD - International Workshop on Reading and Developmental Dyslexia. Bilbao, Spain. 2016-05-05 - 2016-05-07.

    Abstract

    Serial naming deficits have been identified as core symptoms of developmental dyslexia. A prominent hypothesis is that naming delays are due to inefficient phonological encoding, yet the exact nature of this underlying impairment remains largely underspecified. Here we used recordings of eye movements and word onset latencies to examine at what processing level the dyslexic naming deficit emerges: whether it is localized at an early stage of lexical encoding or arises later at the level of phonetic or motor planning. Twenty-three dyslexic and 25 control adult readers were tested on a serial object naming task with 30 items and an analogous reading task, in which phonological neighborhood density and word frequency were manipulated. Results showed that both word properties influenced early stages of phonological activation (first fixation and first-pass duration) equally in both groups of participants. Moreover, in the control group any difficulty appeared to be resolved early in the reading process, while for dyslexic readers a processing disadvantage for low-frequency words and for words with sparse neighborhoods also emerged in a measure that included late stages of output planning (eye-voice span). Thus, our findings suggest suboptimal phonetic and/or articulatory planning in dyslexia.
  • Eisner, F., Kumar, U., Mishra, R. K., Nand Tripathi, V., Guleria, A., Prakash Singh, J., & Huettig, F. (2016). Literacy acquisition drives hemispheric lateralization of reading. Talk presented at Architectures and Mechanisms for Language Processing (AMLaP 2016). Bilbao, Spain. 2016-09-01 - 2016-09-03.

    Abstract

    Reading functions beyond early visual processing are known to be lateralized to the left hemisphere, but how left-lateralization arises during literacy acquisition is an open question. Bilateral processing or rightward asymmetries have previously been associated with developmental dyslexia. However, it is unclear at present to what extent this lack of left-lateralization reflects differences in reading ability. In this study, a group of illiterate adults in rural India (N=29) participated in a literacy training program over the course of six months. fMRI measures were obtained before and after training on a number of different visual stimulus categories, including written sentences, false fonts, and object categories such as houses and faces. This training group was matched on demographic and socioeconomic variables to an illiterate no-training group and to low- and highly-literate control groups, who were also scanned twice but received no training (total N=90). In a cross-sectional analysis before training, reading ability was positively correlated with increased BOLD responses in a left-lateralized network including the dorsal and ventral visual streams for text and false fonts, but not for other types of visual stimuli. A longitudinal analysis of learning effects in the training group showed that beginning readers engage bilateral networks more than proficient readers. Lateralization of BOLD responses was further examined by calculating laterality indices in specific regions. We observed training-related changes in lateralization for processing written stimuli in a number of subregions in the dorsal and ventral visual streams, as well as in the cerebellum. Together with the cross-sectional results, these data suggest a causal relationship between reading ability and the degree of hemispheric asymmetry in processing written materials.
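    The laterality indices mentioned in the abstract are conventionally computed as LI = (L - R) / (L + R) over hemispheric activation measures, yielding +1 for fully left-lateralized and -1 for fully right-lateralized responses. The abstract does not specify the exact computation used in this study; the following is a minimal sketch of the standard formula, with hypothetical activation values for illustration.

    ```python
    def laterality_index(left_activation: float, right_activation: float) -> float:
        """Standard laterality index: +1 = fully left-lateralized,
        -1 = fully right-lateralized, 0 = perfectly bilateral."""
        total = left_activation + right_activation
        if total == 0:
            raise ValueError("No activation in either hemisphere")
        return (left_activation - right_activation) / total

    # Hypothetical example: a region responding twice as strongly
    # in the left hemisphere as in the right.
    print(laterality_index(2.0, 1.0))  # ~0.33, i.e., left-lateralized
    ```

    In practice, L and R are often counts of suprathreshold voxels or mean beta values within homologous regions of interest; a training-related increase in LI for written stimuli would indicate growing left-lateralization.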
  • Eisner, F., Kumar, U., Mishra, R. K., Nand Tripathi, V., Guleria, A., Prakash Singh, J., & Huettig, F. (2016). Literacy acquisition drives hemispheric lateralization of reading. Talk presented at the 31st International Congress of Psychology (ICP2016). Yokohama, Japan. 2016-07-24 - 2016-07-29.

    Abstract

    Reading functions beyond early visual processing are known to be lateralized to the left hemisphere, but how left-lateralization arises during literacy acquisition is an open question. Bilateral processing or rightward asymmetries have previously been associated with developmental dyslexia. However, it is unclear at present to what extent this lack of left-lateralization reflects differences in reading ability. In this study, a group of illiterate adults in rural India (N=29) participated in a literacy training program over the course of six months. fMRI measures were obtained before and after training on a number of different visual stimulus categories, including written sentences, false fonts, and object categories such as houses and faces. This training group was matched on demographic and socioeconomic variables to an illiterate no-training group and to low- and highly-literate control groups, who were also scanned twice but received no training (total N=90). In a cross-sectional analysis before training, reading ability was positively correlated with increased BOLD responses in a left-lateralized network including the dorsal and ventral visual streams for text and false fonts, but not for other types of visual stimuli. A longitudinal analysis of learning effects in the training group showed that beginning readers engage bilateral networks more than proficient readers. Lateralization of BOLD responses was further examined by calculating laterality indices in specific regions. We observed training-related changes in lateralization for processing written stimuli in a number of subregions in the dorsal and ventral visual streams, as well as in the cerebellum. Together with the cross-sectional results, these data suggest a causal relationship between reading ability and the degree of hemispheric asymmetry in processing written materials.
  • Huettig, F. (2016). Is prediction necessary to understand language? Talk presented at the RefNet Round Table conference. Aberdeen, Scotland. 2016-01-15 - 2016-01-16.

    Abstract

    Many psycholinguistic experiments suggest that prediction is an important characteristic of language processing. Some recent theoretical accounts in the cognitive sciences (e.g., Clark, 2013; Friston, 2010) and psycholinguistics (e.g., Dell & Chang, 2014) appear to suggest that prediction is even necessary to understand language. I will evaluate this proposal. I will first discuss several arguments that may appear to be in line with the notion that prediction is necessary for language processing. These arguments include that prediction provides a unified theoretical principle of the human mind and that it pervades cortical function. I will discuss whether evidence of human abilities to detect statistical regularities is necessarily evidence for predictive processing and evaluate suggestions that prediction is necessary for language learning. Five arguments are then presented that question the claim that all language processing is predictive in nature. I point out that not all language users appear to predict language and that suboptimal input often makes prediction very challenging. Prediction, moreover, is strongly context-dependent and impeded by resource limitations. I will also argue that it may be problematic that most experimental evidence for predictive language processing comes from 'prediction-encouraging' experimental set-ups. Finally, I will discuss possible ways that may lead to a further resolution of this debate. I conclude that languages can be learned and understood in the absence of prediction. Claims that all language processing is predictive in nature are premature.
  • Huettig, F. (2016). The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants. Talk presented at the Psychology Department, University of Brussels. Brussels, Belgium. 2016-10.
  • Huettig, F., Kumar, U., Mishra, R. K., Tripathi, V., Guleria, A., Prakash Singh, J., & Eisner, F. (2016). The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants. Talk presented at the International meeting of the Psychonomic Society. Granada, Spain. 2016-05-05 - 2016-05-08.

    Abstract

    How do human cultural inventions such as reading result in neural re-organization? In this first longitudinal study with young completely illiterate adult participants, we measured brain responses to speech, text, and other categories of visual stimuli with fMRI before and after a group of illiterate participants in India completed a literacy training program in which they learned to read and write Devanagari script. A literate and an illiterate no-training control group were matched to the training group in terms of socioeconomic background and were recruited from the same societal community in two villages of a rural area near Lucknow, India. This design permitted investigating effects of literacy cross-sectionally across groups before training (N=86) as well as longitudinally (training group N=25). The two analysis approaches yielded converging results: Literacy was associated with enhanced, left-lateralized responses to written text along the ventral stream (including lingual gyrus, fusiform gyrus, and parahippocampal gyrus), dorsal stream (intraparietal sulcus), and (pre-) motor systems (pre-central sulcus, supplementary motor area) and thalamus (pulvinar). Significantly reduced responses were observed bilaterally in the superior parietal lobe (precuneus) and in the right angular gyrus. These effects corroborate and extend previous findings from cross-sectional studies. However, effects of literacy were specific to written text and (to a lesser extent) to false fonts. We did not find any evidence for effects of literacy on responses in the auditory cortex in our Hindi-speaking participants. This raises questions about the extent to which phonological representations are altered by literacy acquisition.
  • Ostarek, M., & Huettig, F. (2016). Sensory representations are causally involved in cognition but only when the task requires it. Talk presented at the 3rd Attentive Listener in the Visual World (AttLis) workshop. Potsdam, Germany. 2016-05-10 - 2016-05-11.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2016). The multimodal nature of spoken word processing in the visual world: Testing the predictions of alternative models of multimodal integration. Talk presented at the 15th Neural Computation and Psychology Workshop: Contemporary Neural Network Models (NCPW15). Philadelphia, PA, USA. 2016-08-08 - 2016-08-09.
  • Speed, L., Chen, J., Huettig, F., & Majid, A. (2016). Do classifier categories affect or reflect object concepts? Talk presented at the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016). Philadelphia, PA, USA. 2016-08-10 - 2016-08-13.

    Abstract

    We conceptualize objects based on sensory and motor information gleaned from real-world experience. But to what extent is such conceptual information structured according to higher level linguistic features too? Here we investigate whether classifiers, a grammatical category, shape the conceptual representations of objects. In three experiments native Mandarin speakers (speakers of a classifier language) and native Dutch speakers (speakers of a language without classifiers) judged the similarity of a target object (presented as a word or picture) with four objects (presented as words or pictures). One object shared a classifier with the target, the other objects did not, serving as distractors. Across all experiments, participants judged the target object as more similar to the object with the shared classifier than distractor objects. This effect was seen in both Dutch and Mandarin speakers, and there was no difference between the two languages. Thus, even speakers of a non-classifier language are sensitive to object similarities underlying classifier systems, and using a classifier system does not exaggerate these similarities. This suggests that classifier systems simply reflect, rather than affect, conceptual structure.
  • Huettig, F., Singh, N., & Mishra, R. (2010). Language-mediated prediction is contingent upon formal literacy. Talk presented at Brain, Speech and Orthography Workshop. Brussels, Belgium. 2010-10-15 - 2010-10-16.

    Abstract

    A wealth of research has demonstrated that prediction is a core feature of human information processing. Much less is known, however, about the nature and the extent of predictive processing abilities. Here we investigated whether high levels of language expertise attained through formal literacy are related to anticipatory language-mediated visual orienting. Indian low and high literates listened to simple spoken sentences containing a target word (e.g., "door") while at the same time looking at a visual display of four objects (a target, i.e. the door, and three distractors). The spoken sentences were constructed to encourage anticipatory eye movements to visual target objects. High literates started to shift their eye gaze to the target object well before target word onset. In the low literacy group this shift of eye gaze occurred more than a second later, well after the onset of the target. Our findings suggest that formal literacy is crucial for the fine-tuning of language-mediated anticipatory mechanisms, abilities which proficient language users can then exploit for other cognitive activities such as language-mediated visual orienting.
  • Huettig, F. (2010). Looking, language, and memory. Talk presented at Language, Cognition, and Emotion Workshop. Delhi, India. 2010-12-06 - 2010-12-06.
  • Huettig, F. (2010). Toddlers’ language-mediated visual search: They need not have the words for it. Talk presented at International Conference on Cognitive Development 2010. Allahabad, India. 2010-12-10 - 2010-12-13.

    Abstract

    Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., “strawberry”), listeners tend to fixate a colour-matched distractor (e.g., a red plane) more than a colour-mismatched distractor (e.g., a yellow plane). We ask whether these shifts in visual attention are mediated by the retrieval of lexically stored colour labels. Do children who do not yet possess verbal labels for the colour attribute that spoken and viewed objects have in common exhibit language-mediated eye movements like those made by older children and adults? That is, do toddlers look at a red plane when hearing “strawberry”? We observed that 24-month-olds lacking colour-term knowledge nonetheless recognised the perceptual-conceptual commonality between named and seen objects. This indicates that language-mediated visual search need not depend on stored labels for concepts.
  • Huettig, F., & McQueen, J. M. (2009). AM radio noise changes the dynamics of spoken word recognition. Talk presented at 15th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2009). Barcelona, Spain. 2009-09-09.

    Abstract

    Language processing does not take place in isolation from the sensory environment. Listeners are able to recognise spoken words in many different situations, ranging from carefully articulated and noise-free laboratory speech, through casual conversational speech in a quiet room, to degraded conversational speech in a busy train-station. For listeners to be able to recognize speech optimally in each of these listening situations, they must be able to adapt to the constraints of each situation. We investigated this flexibility by comparing the dynamics of the spoken-word recognition process in clear speech and speech disrupted by radio noise. In Experiment 1, Dutch participants listened to clearly articulated spoken Dutch sentences which each included a critical word while their eye movements to four visual objects presented on a computer screen were measured. There were two critical conditions. In the first, the objects included a cohort competitor (e.g., parachute, “parachute”) with the same onset as the critical spoken word (e.g., paraplu, “umbrella”) and three unrelated distractors. In the second condition, a rhyme competitor (e.g., hamer, “hammer”) of the critical word (e.g., kamer, “room”) was present in the display, again with three distractors. To maximize competitor effects, pictures of the critical words themselves were not present in the displays on the experimental trials (e.g., there was no umbrella in the display with the 'paraplu' sentence) and a passive listening task was used (Huettig & McQueen, 2007). Experiment 2 was identical to Experiment 1 except that phonemes in the spoken sentences were replaced with radio-signal noises (as in AM radio listening conditions). In each sentence, two, three, or four phonemes were replaced with noises. The sentential position of these replacements was unpredictable, but the adjustments were always made to onset phonemes. The critical words (and the immediately surrounding words) were not changed.
The question was whether listeners could learn that, under these circumstances, onset information is less reliable. We predicted that participants would look less at the cohort competitors (the initial match to the competitor is less good) and more at the rhyme competitors (the initial mismatch is less bad). We observed a significant experiment by competitor type interaction. In Experiment 1 participants fixated both kinds of competitors more than unrelated distractors, but there were more and earlier looks to cohort competitors than to rhyme competitors (Allopenna et al., 1998). In Experiment 2 participants still fixated cohort competitors more than rhyme competitors, but the early cohort effect was reduced and the rhyme effect was stronger and occurred earlier. These results suggest that AM radio noise changes the dynamics of spoken word recognition. The well-attested finding of stronger reliance on word onset overlap in speech recognition appears to be due in part to the use of clear speech in most experiments. When onset information becomes less reliable, listeners appear to depend on it less. A core feature of the speech-recognition system thus appears to be its flexibility. Listeners are able to adjust the perceptual weight they assign to different parts of incoming spoken language.
  • Huettig, F. (2009). Language-mediated visual search [Invited talk]. Talk presented at VU Amsterdam. Amsterdam, The Netherlands.
  • Huettig, F. (2009). On the use of distributional models of semantic space to investigate human cognition. Talk presented at Distributional Semantics beyond Concrete Concepts (Workshop at the Annual Meeting of the Cognitive Science Society, CogSci 2009). Amsterdam, The Netherlands. 2009-07-29 - 2009-08-01.
  • Huettig, F. (2009). The role of colour during language-vision interactions. Talk presented at International Conference on Language-Cognition Interface 2009. Allahabad, India. 2009-12-06 - 2009-12-09.
  • Huettig, F., Chen, J., Bowerman, M., & Majid, A. (2008). Linguistic relativity: Evidence from Mandarin speakers’ eye-movements. Talk presented at 14th Annual Conference on the Architectures and Mechanisms for Language Processing [AMLaP 2008]. Cambridge, UK. 2008-09-04 - 2008-09-06.

    Abstract

    If a Mandarin speaker had walked past two rivers and wished to describe how many he had seen, he would have to say “two tiao river”, where tiao designates long, rope-like objects such as rivers, snakes and legs. Tiao is one of several hundred classifiers – a grammatical category in Mandarin. In two eye-tracking studies we presented Mandarin speakers with simple Mandarin sentences through headphones while monitoring their eye-movements to objects presented on a computer monitor. The crucial question is what participants look at while listening to a pre-specified target noun. If classifier categories influence general conceptual processing then on hearing the target noun participants should look at objects that are also members of the same classifier category – even when the classifier is not explicitly present. For example, on hearing scissors, Mandarin speakers should look more at a picture of a chair than at an unrelated object because scissors and chair share the classifier ba. This would be consistent with a Strong Whorfian position, according to which language is a major determinant in shaping conceptual thought (Sapir, 1921; Whorf, 1956). A weaker influence of language-on-thought could be predicted, where language shapes cognitive processing, but only when the language-specific category is actively being processed (Slobin, 1996). According to this account, eye-movements are not necessarily drawn to chair when a participant hears scissors, but they would be on hearing ba scissors. This is because hearing ba activates the linguistic category that both scissors and chair belong to. A third logical possibility is that classifiers are purely formal markers (cf. Greenberg, 1972; Lehman, 1979) that do not influence attentional processing even when they are explicitly present. The data showed that when participants heard a spoken word from the same classifier category as a visually depicted object (e.g. scissors-chair), but the classifier was not explicitly presented in the speech, overt attention to classifier-match objects (e.g. chair) and distractor objects did not differ (Experiment 1). But when the classifier was explicitly presented (e.g. ba, Experiment 2), participants shifted overt attention significantly more to classifier-match objects (e.g. chair) than to distractors. These data are incompatible with the Strong Whorfian hypothesis. Instead the findings support the Weak Whorfian hypothesis that linguistic distinctions force attention to properties of the world but only during active linguistic processing of that distinction (cf. Slobin, 1996).