Falk Huettig

Presentations

  • Hintz, F., & Huettig, F. (2012). Phonological word-object mapping is contingent upon the nature of the visual environment. Talk presented at Psycholinguistics in Flanders goes Dutch [PiF 2012]. Berg en Dal, The Netherlands. 2012-06-06 - 2012-06-07.
  • Huettig, F., Singh, N., Singh, S., & Mishra, R. K. (2012). Language-mediated prediction is related to reading ability and formal literacy. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2012]. Mannheim, Germany. 2012-04-04 - 2012-04-06.
  • Huettig, F. (2012). Literacy modulates language-mediated visual attention and prediction. Talk presented at the Center of Excellence Cognitive Interaction Technology (CITEC). Bielefeld, Germany. 2012-01-12.
  • Huettig, F. (2012). The nature and mechanisms of language-mediated anticipatory eye movements. Talk presented at the International Symposium: The Attentive Listener in the Visual World: The Interaction of Language, Attention, Memory, and Vision. Allahabad, India. 2012-10-05 - 2012-10-06.
  • Mani, N., & Huettig, F. (2012). Toddlers anticipate that we EAT cake. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2012]. Mannheim, Germany. 2012-04-04 - 2012-04-06.
  • Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2012). Object shape representations in the contents of predictions for upcoming words. Talk presented at Psycholinguistics in Flanders [PiF 2012]. Berg en Dal, The Netherlands. 2012-06-06 - 2012-06-07.
  • Rommers, J., Meyer, A. S., Praamstra, P., & Huettig, F. (2012). The content of predictions: Involvement of object shape representations in the anticipation of upcoming words. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2012]. Mannheim, Germany. 2012-04-04 - 2012-04-06.
  • Rommers, J., Meyer, A. S., & Huettig, F. (2012). Predicting upcoming meaning involves specific contents and domain-general mechanisms. Talk presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2012]. Riva del Garda, Italy. 2012-09-06 - 2012-09-08.

    Abstract

    In sentence comprehension, readers and listeners often anticipate upcoming information (e.g., Altmann & Kamide, 1999). We investigated two aspects of this process, namely (1) what is pre-activated when anticipating an upcoming word (the contents of predictions), and (2) which cognitive mechanisms are involved. The contents of predictions at the level of meaning could be restricted to functional semantic attributes (e.g., edibility; Altmann & Kamide, 1999). However, when words are processed other types of information can also be activated, such as object shape representations. It is unknown whether this type of information is already activated when upcoming words are predicted. Forty-five adult participants listened to predictable words in sentence contexts (e.g., "In 1969 Neil Armstrong was the first man to set foot on the moon.") while looking at visual displays of four objects. Their eye movements were recorded. There were three conditions: target present (e.g., a moon and three distractor objects that were unrelated to the predictable word in terms of semantics, shape, and phonology), shape competitor (e.g., a tomato and three unrelated distractors), and distractors only (e.g., rice and three other unrelated objects). Across lists, the same pictures and sentences were used in the different conditions. We found that participants already showed a significant bias for the target object (moon) over unrelated distractors several seconds before the target was mentioned, demonstrating that they were predicting. Importantly, there was also a smaller but significant shape competitor (tomato) preference starting at about a second before critical word onset, consistent with predictions involving the referent’s shape. The mechanisms of predictions could be specific to language tasks, or language could use processing principles that are also used in other domains of cognition.

    We investigated whether performance in non-linguistic prediction is related to prediction in language processing, taking an individual differences approach. In addition to the language processing task, the participants performed a simple cueing task (after Posner, Nissen, & Ogden, 1978). They pressed one of two buttons (left/right) to indicate the location of an X symbol on the screen. On half of the trials, the X was preceded by a neutral cue (+). On the other half, an arrow cue pointing left (<) or right (>) indicated the upcoming X's location with 80% validity (i.e., the arrow cue was correct 80% of the time). The SOA between cue and target was 500 ms. Prediction was quantified as the mean response latency difference between the neutral and valid conditions. This measure correlated positively with individual participants' anticipatory target and shape competitor preference (r = .27; r = .45), and was a significant predictor of anticipatory looks in linear mixed-effects regression models of the data. Participants who showed more facilitation from the arrow cues predicted to a higher degree in the linguistic task. This suggests that prediction in language processing may use mechanisms that are also used in other domains of cognition.

    References

    Altmann, G. T. M., & Kamide, Y. (1999). Incremental interpretation at verbs: Restricting the domain of subsequent reference. Cognition, 73(3), 247-264.
    Posner, M. I., Nissen, M. J., & Ogden, W. C. (1978). Attended and unattended processing modes: The role of set for spatial location. In H. L. Pick & I. J. Saltzman (Eds.), Modes of perceiving and processing information. Hillsdale, NJ: Lawrence Erlbaum Associates.
  • Smith, A. C., Huettig, F., & Monaghan, P. (2012). Modelling multimodal interaction in language mediated eye gaze. Talk presented at the 13th Neural Computation and Psychology Workshop [NCPW13]. San Sebastian, Spain. 2012-07-12 - 2012-07-14.

    Abstract

    Hub-and-spoke models of semantic processing, which integrate modality-specific information within a central resource, have proven successful in capturing a range of neuropsychological phenomena (Rogers et al., 2004; Dilkina et al., 2008). In our study we investigate whether the scope of the hub-and-spoke architectural framework can be extended to capture behavioural phenomena in other areas of cognition. The visual world paradigm (VWP) has contributed significantly to our understanding of the information and processes involved in spoken word recognition. In particular, it has highlighted the importance of non-linguistic influences during language processing, indicating that combined information from vision, phonology, and semantics is evident in performance on such tasks (see Huettig, Rommers, & Meyer, 2011). Huettig and McQueen (2007) demonstrated that participants’ fixations to objects presented within a single visual display varied systematically according to their phonological, semantic, and visual relationship to a spoken target word. The authors argue that only an explanation allowing for influence from all three knowledge types is capable of accounting for the observed behaviour. To date, computational models of the VWP (Allopenna et al., 1998; Mayberry et al., 2009; Kukona et al., 2011) have focused largely on linguistic aspects of the task and have therefore been unable to offer explanations for the growing body of experimental evidence emphasising the influence of non-linguistic information on spoken word recognition. Our study demonstrates that an emergent connectionist model, based on the hub-and-spoke models of semantic processing, which integrates visual, phonological, and functional information within a central resource, is able to capture the intricate time course dynamics of eye fixation behaviour reported in Huettig and McQueen (2007). Our findings indicate that such language-mediated visual attention phenomena can emerge largely from the statistics of the problem domain and may not require additional domain-specific processing constraints.
  • Smith, A. C., Huettig, F., & Monaghan, P. (2012). The tug of war during spoken word recognition in our visual worlds. Talk presented at Psycholinguistics in Flanders [PiF 2012]. Berg en Dal, The Netherlands. 2012-06-06 - 2012-06-07.
  • Huettig, F., Chen, J., Bowerman, M., & Majid, A. (2008). Linguistic relativity: Evidence from Mandarin speakers’ eye-movements. Talk presented at the 14th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2008]. Cambridge, UK. 2008-09-04 - 2008-09-06.

    Abstract

    If a Mandarin speaker had walked past two rivers and wished to describe how many he had seen, he would have to say “two tiao river”, where tiao designates long, rope-like objects such as rivers, snakes and legs. Tiao is one of several hundred classifiers – a grammatical category in Mandarin. In two eye-tracking studies we presented Mandarin speakers with simple Mandarin sentences through headphones while monitoring their eye-movements to objects presented on a computer monitor. The crucial question is what participants look at while listening to a pre-specified target noun. If classifier categories influence general conceptual processing then on hearing the target noun participants should look at objects that are also members of the same classifier category – even when the classifier is not explicitly present. For example, on hearing scissors, Mandarin speakers should look more at a picture of a chair than at an unrelated object because scissors and chair share the classifier ba. This would be consistent with a Strong Whorfian position, according to which language is a major determinant in shaping conceptual thought (Sapir, 1921; Whorf, 1956). A weaker influence of language-on-thought could be predicted, where language shapes cognitive processing, but only when the language-specific category is actively being processed (Slobin, 1996). According to this account, eye-movements are not necessarily drawn to chair when a participant hears scissors, but they would be on hearing ba scissors. This is because hearing ba activates the linguistic category that both scissors and chair belong to. A third logical possibility is that classifiers are purely formal markers (cf. Greenberg, 1972; Lehman, 1979) that do not influence attentional processing even when they are explicitly present.

    The data showed that when participants heard a spoken word from the same classifier category as a visually depicted object (e.g. scissors-chair), but the classifier was not explicitly presented in the speech, overt attention to classifier-match objects (e.g. chair) and distractor objects did not differ (Experiment 1). But when the classifier was explicitly presented (e.g. ba, Experiment 2), participants shifted overt attention significantly more to classifier-match objects (e.g. chair) than to distractors. These data are incompatible with the Strong Whorfian hypothesis. Instead the findings support the Weak Whorfian hypothesis that linguistic distinctions force attention to properties of the world but only during active linguistic processing of that distinction (cf. Slobin, 1996).
