Yayun Zhang

Publications

  • Sander, J., Zhang, Y., & Rowland, C. F. (2025). Language acquisition occurs in multimodal social interaction: A commentary on Karadöller, Sümer and Özyürek [invited commentary]. First Language. Advance online publication. doi:10.1177/01427237251326984.

    Abstract

    We argue that language learning occurs in triadic interactions, where caregivers and children engage not only with each other but also with objects, actions and non-verbal cues that shape language acquisition. We illustrate this using two studies on real-time interactions in spoken and signed language. The first examines shared book reading, showing how caregivers use speech, gestures and gaze coordination to establish joint attention, facilitating word-object associations. The second study explores joint attention in spoken and signed interactions, demonstrating that signing dyads rely on a wider range of multimodal behaviours – such as touch, vibrations and peripheral gaze – compared to speaking dyads. Our data highlight how different language modalities shape attentional strategies. We advocate for research that fully incorporates the dynamic interplay between language, attention and environment.
  • Romberg, A., Zhang, Y., Newman, B., Triesch, J., & Yu, C. (2016). Global and local statistical regularities control visual attention to object sequences. In Proceedings of the 2016 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) (pp. 262-267).

    Abstract

    Many previous studies have shown that both infants and adults are skilled statistical learners. Because statistical learning is affected by attention, learners' ability to manage their attention can play a large role in what they learn. However, it is still unclear how learners allocate their attention in order to gain information in a visual environment containing multiple objects, and especially how prior visual experience (i.e., familiarity with objects) influences where people look. To answer these questions, we collected eye movement data from adults exploring multiple novel objects while manipulating object familiarity with global (frequencies) and local (repetitions) regularities. We found that participants are sensitive to both global and local statistics embedded in their visual environment and that they dynamically shift their attention to prioritize some objects over others as they gain knowledge of the objects and their distributions within the task.
  • Zhang, Y., & Yu, C. (2016). Examining referential uncertainty in naturalistic contexts from the child’s view: Evidence from an eye-tracking study with infants. In A. Papafragou, D. Grodner, D. Mirman, & J. Trueswell (Eds.), Proceedings of the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 2027-2032). Austin, TX: Cognitive Science Society.

    Abstract

    Young infants are prolific word learners even though they face the challenge of referential uncertainty (Quine, 1960). Many laboratory studies have shown that infants are skilled at inferring the correct referents of words from ambiguous contexts (Swingley, 2009). However, little is known about how they visually attend to and select the target object among the many other objects in view when parents name it during everyday interactions. By investigating the looking patterns of 12-month-old infants using naturalistic first-person images with varying degrees of referential ambiguity, we found that infants’ attention is selective: despite the complexity of real-world input, they attend to only a small subset of objects at each learning instance. This work helps us better understand how the perceptual properties of objects in infants’ view influence their visual attention, which in turn relates to how they select candidate objects to build word-object mappings.
