Ünal, E., Kırbaşoğlu, K., Karadöller, D. Z., Sumer, B., & Özyürek, A. (2025). Gesture reduces mapping difficulties in the development of spatial language depending on the complexity of spatial relations. Cognitive Science, 49(2): e70046. doi:10.1111/cogs.70046.
Abstract
In spoken languages, children acquire locative terms in a cross-linguistically stable order. Terms similar in meaning to in and on emerge earlier than those similar to front and behind, followed by left and right. This order has been attributed to the complexity of the relations expressed by different locative terms. An additional possibility is that children may be delayed in expressing certain spatial meanings partly due to difficulties in discovering the mappings between locative terms in speech and the spatial relations they express. We investigate cognitive and mapping difficulties in the domain of spatial language by comparing how children map spatial meanings onto speech versus visually motivated forms in co-speech gesture across different spatial relations. Twenty-four 8-year-old and 23 adult native Turkish speakers described four-picture displays in which the target picture depicted in-on, front-behind, or left-right relations between objects. As the complexity of spatial relations increased, children were more likely to rely on gestures as opposed to speech to informatively express the spatial relation. Adults overwhelmingly relied on speech to informatively express the spatial relation, and this did not change across the complexity of spatial relations. Nevertheless, even when spatial expressions in both speech and co-speech gesture were considered, children lagged behind adults when expressing the most complex left-right relations. These findings suggest that cognitive development and mapping difficulties introduced by the modality of expressions interact in shaping the development of spatial language.
Additional information: list of stimuli and descriptions
Ter Bekke, M., Özyürek, A., & Ünal, E. (2019). Speaking but not gesturing predicts motion event memory within and across languages. In A. Goel, C. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2940-2946). Montreal, QC: Cognitive Science Society.
Abstract
In everyday life, people see, describe and remember motion events. We tested whether the type of motion event information (path or manner) encoded in speech and gesture predicts which information is remembered, and whether this varies across speakers of typologically different languages. We focus on intransitive motion events (e.g., a woman running to a tree) that are described differently in speech and co-speech gesture across languages, based on how these languages typologically encode manner and path information (Kita & Özyürek, 2003; Talmy, 1985). Speakers of Dutch (n = 19) and Turkish (n = 22) watched and described motion events. With a surprise (i.e., unexpected) recognition memory task, memory for manner and path components of these events was measured. Neither Dutch nor Turkish speakers’ memory for manner went above chance levels. However, we found a positive relation between path speech and path change detection: participants who described the path during encoding were more accurate at detecting changes to the path of an event during the memory task. In addition, the relation between path speech and path memory changed with native language: for Dutch speakers, encoding path in speech was related to improved path memory, but for Turkish speakers no such relation existed. For both languages, co-speech gesture did not predict memory. We discuss the implications of these findings for our understanding of the relations between speech, gesture, type of encoding in language, and memory.
Additional information: https://mindmodeling.org/cogsci2019/papers/0496/0496.pdf
Ünal, E., & Papafragou, A. (2019). How children identify events from visual experience. Language Learning and Development, 15(2), 138-156. doi:10.1080/15475441.2018.1544075.
Abstract
Three experiments explored how well children recognize events from different types of visual experience: either by directly seeing an event or by indirectly experiencing it from post-event visual evidence. In Experiment 1, 4- and 5- to 6-year-old Turkish-speaking children (n = 32) successfully recognized events through either direct or indirect visual access. In Experiment 2, a new group of 4- and 5- to 6-year-olds (n = 37) reliably attributed event recognition to others who had direct or indirect visual access to events (even though performance was lower than in Experiment 1). In both experiments, although children’s accuracy improved with age, there was no difference between the two types of access. Experiment 3 replicated the findings from the youngest participants of Experiments 1 and 2 with a matched sample of English-speaking 4-year-olds (n = 37). Thus children can use different kinds of visual experience to support event representations in themselves and others.
Ünal, E., Pinto, A., Bunger, A., & Papafragou, A. (2016). Monitoring sources of event memories: A cross-linguistic investigation. Journal of Memory and Language, 87, 157-176. doi:10.1016/j.jml.2015.10.009.
Abstract
When monitoring the origins of their memories, people tend to mistakenly attribute memories generated from internal processes (e.g., imagination, visualization) to perception. Here, we ask whether speaking a language that obligatorily encodes the source of information might help prevent such errors. We compare speakers of English to speakers of Turkish, a language that obligatorily encodes information source (direct/perceptual vs. indirect/hearsay or inference) for past events. In our experiments, participants reported having seen events that they had only inferred from post-event visual evidence. In general, error rates were higher when visual evidence that gave rise to inferences was relatively close to direct visual evidence. Furthermore, errors persisted even when participants were asked to report the specific sources of their memories. Crucially, these error patterns were equivalent across language groups, suggesting that speaking a language that obligatorily encodes source of information does not increase sensitivity to the distinction between perception and inference in event memory.
Ünal, E., & Papafragou, A. (2016). Interactions between language and mental representations. Language Learning, 66(3), 554-580. doi:10.1111/lang.12188.
Abstract
It has long been recognized that language interacts with visual and spatial processes. However, the nature and extent of these interactions are widely debated. The goal of this article is to review empirical findings across several domains to understand whether language affects the way speakers conceptualize the world even when they are not speaking or understanding speech. A second goal of the present review is to shed light on the mechanisms through which effects of language are transmitted. Across domains, there is growing support for the idea that although language does not lead to long-lasting changes in mental representations, it exerts powerful influences during momentary mental computations by either modulating attention or augmenting representational power.
Ünal, E., & Papafragou, A. (2016). Production–comprehension asymmetries and the acquisition of evidential morphology. Journal of Memory and Language, 89, 179-199. doi:10.1016/j.jml.2015.12.001.
Abstract
Although children typically comprehend the links between specific forms and their meanings before they produce the forms themselves, the opposite pattern also occurs. The nature of these ‘reverse asymmetries’ between production and comprehension remains debated. Here we focus on a striking case where production precedes comprehension in the acquisition of Turkish evidential morphology and explore theoretical explanations of this asymmetry. We show that 3- to 6-year-old Turkish learners produce evidential morphemes accurately (Experiment 1) but have difficulty with evidential comprehension (Experiment 2). Furthermore, comprehension failures persist across multiple tasks (Experiments 3–4). We suggest that evidential comprehension is delayed by the development of mental perspective-taking abilities needed to compute others’ knowledge sources. In support of this hypothesis, we find that children have difficulty reasoning about others’ evidence in non-linguistic tasks, but the difficulty disappears when the tasks involve accessing one’s own evidential sources (Experiment 5).