Ünal, E., Kırbaşoğlu, K., Karadöller, D. Z., Sumer, B., & Özyürek, A. (2025). Gesture reduces mapping difficulties in the development of spatial language depending on the complexity of spatial relations. Cognitive Science, 49(2), e70046. doi:10.1111/cogs.70046.
Abstract
In spoken languages, children acquire locative terms in a cross-linguistically stable order. Terms similar in meaning to in and on emerge earlier than those similar to front and behind, followed by left and right. This order has been attributed to the complexity of the relations expressed by different locative terms. An additional possibility is that children may be delayed in expressing certain spatial meanings partly due to difficulties in discovering the mappings between locative terms in speech and the spatial relations they express. We investigate cognitive and mapping difficulties in the domain of spatial language by comparing how children map spatial meanings onto speech versus visually motivated forms in co-speech gesture across different spatial relations. Twenty-four 8-year-old children and 23 adult native Turkish speakers described four-picture displays in which the target picture depicted in-on, front-behind, or left-right relations between objects. As the complexity of spatial relations increased, children were more likely to rely on gestures as opposed to speech to informatively express the spatial relation. Adults overwhelmingly relied on speech to informatively express the spatial relation, and this did not change with the complexity of spatial relations. Nevertheless, even when spatial expressions in both speech and co-speech gesture were considered, children lagged behind adults in expressing the most complex left-right relations. These findings suggest that cognitive development and mapping difficulties introduced by the modality of expression interact in shaping the development of spatial language.
Additional information: list of stimuli and descriptions
Ter Bekke, M., Özyürek, A., & Ünal, E. (2019). Speaking but not gesturing predicts motion event memory within and across languages. In A. Goel, C. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2940-2946). Montreal, QC: Cognitive Science Society.
Abstract
In everyday life, people see, describe and remember motion events. We tested whether the type of motion event information (path or manner) encoded in speech and gesture predicts which information is remembered, and whether this varies across speakers of typologically different languages. We focus on intransitive motion events (e.g., a woman running to a tree) that are described differently in speech and co-speech gesture across languages, based on how these languages typologically encode manner and path information (Kita & Özyürek, 2003; Talmy, 1985). Speakers of Dutch (n = 19) and Turkish (n = 22) watched and described motion events. With a surprise (i.e., unexpected) recognition memory task, memory for manner and path components of these events was measured. Neither Dutch nor Turkish speakers' memory for manner went above chance levels. However, we found a positive relation between path speech and path change detection: participants who described the path during encoding were more accurate at detecting changes to the path of an event during the memory task. In addition, the relation between path speech and path memory changed with native language: for Dutch speakers, encoding path in speech was related to improved path memory, but for Turkish speakers no such relation existed. For both languages, co-speech gesture did not predict memory. We discuss the implications of these findings for our understanding of the relations between speech, gesture, type of encoding in language, and memory.
Additional information: https://mindmodeling.org/cogsci2019/papers/0496/0496.pdf
Ünal, E., & Papafragou, A. (2019). How children identify events from visual experience. Language Learning and Development, 15(2), 138-156. doi:10.1080/15475441.2018.1544075.
Abstract
Three experiments explored how well children recognize events from different types of visual experience: either by directly seeing an event or by indirectly experiencing it from post-event visual evidence. In Experiment 1, 4- and 5- to 6-year-old Turkish-speaking children (n = 32) successfully recognized events through either direct or indirect visual access. In Experiment 2, a new group of 4- and 5- to 6-year-olds (n = 37) reliably attributed event recognition to others who had direct or indirect visual access to events (even though performance was lower than in Experiment 1). In both experiments, although children's accuracy improved with age, there was no difference between the two types of access. Experiment 3 replicated the findings from the youngest participants of Experiments 1 and 2 with a matched sample of English-speaking 4-year-olds (n = 37). Thus, children can use different kinds of visual experience to support event representations in themselves and others.
Ünal, E., & Papafragou, A. (2018). Evidentials, information sources and cognition. In A. Y. Aikhenvald (Ed.), The Oxford Handbook of Evidentiality (pp. 175-184). Oxford: Oxford University Press.
Ünal, E., & Papafragou, A. (2018). The relation between language and mental state reasoning. In J. Proust & M. Fortier (Eds.), Metacognitive diversity: An interdisciplinary approach (pp. 153-169). Oxford: Oxford University Press.