Ercenur Unal

Publications

  • Ünal, E., Kırbaşoğlu, K., Karadöller, D. Z., Sümer, B., & Özyürek, A. (2025). Gesture reduces mapping difficulties in the development of spatial language depending on the complexity of spatial relations. Cognitive Science, 49(2): e70046. doi:10.1111/cogs.70046.

    Abstract

    In spoken languages, children acquire locative terms in a cross-linguistically stable order. Terms similar in meaning to in and on emerge earlier than those similar to front and behind, followed by left and right. This order has been attributed to the complexity of the relations expressed by different locative terms. An additional possibility is that children may be delayed in expressing certain spatial meanings partly due to difficulties in discovering the mappings between locative terms in speech and the spatial relations they express. We investigate cognitive and mapping difficulties in the domain of spatial language by comparing how children map spatial meanings onto speech versus visually motivated forms in co-speech gesture across different spatial relations. Twenty-four 8-year-old and 23 adult native Turkish speakers described four-picture displays where the target picture depicted in-on, front-behind, or left-right relations between objects. As the complexity of spatial relations increased, children were more likely to rely on gestures as opposed to speech to informatively express the spatial relation. Adults overwhelmingly relied on speech to informatively express the spatial relation, and this did not change across the complexity of spatial relations. Nevertheless, even when spatial expressions in both speech and co-speech gesture were considered, children lagged behind adults when expressing the most complex left-right relations. These findings suggest that cognitive development and mapping difficulties introduced by the modality of expressions interact in shaping the development of spatial language.

    Additional information

    list of stimuli and descriptions
  • Karadöller, D. Z., Sümer, B., Ünal, E., & Özyürek, A. (2024). Sign advantage: Both children and adults’ spatial expressions in sign are more informative than those in speech and gestures combined. Journal of Child Language, 51(4), 876-902. doi:10.1017/S0305000922000642.

    Abstract

    Expressing Left-Right relations is challenging for speaking-children. Yet, this challenge was absent for signing-children, possibly due to iconicity in the visual-spatial modality of expression. We investigate whether there is also a modality advantage when speaking-children’s co-speech gestures are considered. Eight-year-old child and adult hearing monolingual Turkish speakers and deaf signers of Turkish-Sign-Language described pictures of objects in various spatial relations. Descriptions were coded for informativeness in speech, sign, and speech-gesture combinations for encoding Left-Right relations. The use of co-speech gestures increased the informativeness of speakers’ spatial expressions compared to speech-only. This pattern was more prominent for children than adults. However, signing-adults and children were more informative than child and adult speakers even when co-speech gestures were considered. Thus, both speaking- and signing-children benefit from iconic expressions in visual modality. Finally, in each modality, children were less informative than adults, pointing to the challenge of this spatial domain in development.
  • Ünal, E., Wilson, F., Trueswell, J., & Papafragou, A. (2024). Asymmetries in encoding event roles: Evidence from language and cognition. Cognition, 250: 105868. doi:10.1016/j.cognition.2024.105868.

    Abstract

    It has long been hypothesized that the linguistic structure of events, including event participants and their relative prominence, draws on the non-linguistic nature of events and the roles that these events license. However, the precise relation between the prominence of event participants in language and cognition has not been tested experimentally in a systematic way. Here we address this gap. In four experiments, we investigate the relative prominence of (animate) Agents, Patients, Goals and Instruments in the linguistic encoding of complex events and the prominence of these event roles in cognition as measured by visual search and change blindness tasks. The relative prominence of these event roles was largely similar—though not identical—across linguistic and non-linguistic measures. Across linguistic and non-linguistic tasks, Patients were more salient than Goals, which were more salient than Instruments. (Animate) Agents were more salient than Patients in linguistic descriptions and visual search; however, this asymmetrical pattern did not emerge in change detection. Overall, our results reveal homologies between the linguistic and non-linguistic prominence of individual event participants, thereby lending support to the claim that the linguistic structure of events builds on underlying conceptual event representations. We discuss implications of these findings for linguistic theory and theories of event cognition.
  • Ünal, E., Mamus, E., & Özyürek, A. (2024). Multimodal encoding of motion events in speech, gesture, and cognition. Language and Cognition, 16(4), 785-804. doi:10.1017/langcog.2023.61.

    Abstract

    How people communicate about motion events and how this is shaped by language typology are mostly studied with a focus on linguistic encoding in speech. Yet, human communication typically involves an interactional exchange of multimodal signals, such as hand gestures that have different affordances for representing event components. Here, we review recent empirical evidence on multimodal encoding of motion in speech and gesture to gain a deeper understanding of whether and how language typology shapes linguistic expressions in different modalities, and how this changes across different sensory modalities of input and interacts with other aspects of cognition. Empirical evidence strongly suggests that Talmy’s typology of event integration predicts multimodal event descriptions in speech and gesture and visual attention to event components prior to producing these descriptions. Furthermore, variability within the event itself, such as type and modality of stimuli, may override the influence of language typology, especially for expression of manner.
  • Karadöller, D. Z., Sümer, B., Ünal, E., & Özyürek, A. (2023). Late sign language exposure does not modulate the relation between spatial language and spatial memory in deaf children and adults. Memory & Cognition, 51, 582-600. doi:10.3758/s13421-022-01281-7.

    Abstract

    Prior work with hearing children acquiring a spoken language as their first language shows that spatial language and cognition are related systems and spatial language use predicts spatial memory. Here, we further investigate the extent of this relationship in signing deaf children and adults and ask if late sign language exposure, as well as the frequency and the type of spatial language use that might be affected by late exposure, modulate subsequent memory for spatial relations. To do so, we compared spatial language and memory of 8-year-old late-signing children (after 2 years of exposure to a sign language at the school for the deaf) and late-signing adults to their native-signing counterparts. We elicited picture descriptions of Left-Right relations in Turkish Sign Language (Türk İşaret Dili) and measured the subsequent recognition memory accuracy of the described pictures. Results showed that late-signing adults and children were similar to their native-signing counterparts in how often they encoded the spatial relation. However, late-signing adults but not children differed from their native-signing counterparts in the type of spatial language they used. However, neither late sign language exposure nor the frequency and type of spatial language use modulated spatial memory accuracy. Therefore, even though late language exposure seems to influence the type of spatial language use, this does not predict subsequent memory for spatial relations. We discuss the implications of these findings based on the theories concerning the correspondence between spatial language and cognition as related or rather independent systems.
  • Ter Bekke, M., Özyürek, A., & Ünal, E. (2022). Speaking but not gesturing predicts event memory: A cross-linguistic comparison. Language and Cognition, 14(3), 362-384. doi:10.1017/langcog.2022.3.

    Abstract

    Every day people see, describe, and remember motion events. However, the relation between multimodal encoding of motion events in speech and gesture, and memory is not yet fully understood. Moreover, whether language typology modulates this relation remains to be tested. This study investigates whether the type of motion event information (path or manner) mentioned in speech and gesture predicts which information is remembered and whether this varies across speakers of typologically different languages. Dutch- and Turkish-speakers watched and described motion events and completed a surprise recognition memory task. For both Dutch- and Turkish-speakers, manner memory was at chance level. Participants who mentioned path in speech during encoding were more accurate at detecting changes to the path in the memory task. The relation between mentioning path in speech and path memory did not vary cross-linguistically. Finally, the co-speech gesture did not predict memory above mentioning path in speech. These findings suggest that how speakers describe a motion event in speech is more important than the typology of the speakers’ native language in predicting motion event memory. The motion event videos are available for download for future research at https://osf.io/p8cas/.

    Additional information

    S1866980822000035sup001.docx
  • Ünal, E., Manhardt, F., & Özyürek, A. (2022). Speaking and gesturing guide event perception during message conceptualization: Evidence from eye movements. Cognition, 225: 105127. doi:10.1016/j.cognition.2022.105127.

    Abstract

    Speakers’ visual attention to events is guided by linguistic conceptualization of information in spoken language production and in language-specific ways. Does production of language-specific co-speech gestures further guide speakers’ visual attention during message preparation? Here, we examine the link between visual attention and multimodal event descriptions in Turkish. Turkish is a verb-framed language where speakers’ speech and gesture show language specificity with path of motion mostly expressed within the main verb accompanied by path gestures. Turkish-speaking adults viewed motion events while their eye movements were recorded during non-linguistic (viewing-only) and linguistic (viewing-before-describing) tasks. The relative attention allocated to path over manner was higher in the linguistic task compared to the non-linguistic task. Furthermore, the relative attention allocated to path over manner within the linguistic task was higher when speakers (a) encoded path in the main verb versus outside the verb and (b) used additional path gestures accompanying speech versus not. Results strongly suggest that speakers’ visual attention is guided by language-specific event encoding not only in speech but also in gesture. This provides evidence consistent with models that propose integration of speech and gesture at the conceptualization level of language production and suggests that the links between the eye and the mouth may be extended to the eye and the hand.
  • Ünal, E., Ji, Y., & Papafragou, A. (2021). From event representation to linguistic meaning. Topics in Cognitive Science, 13(1), 224-242. doi:10.1111/tops.12475.

    Abstract

    A fundamental aspect of human cognition is the ability to parse our constantly unfolding experience into meaningful representations of dynamic events and to communicate about these events with others. How do we communicate about events we have experienced? Influential theories of language production assume that the formulation and articulation of a linguistic message is preceded by preverbal apprehension that captures core aspects of the event. Yet the nature of these preverbal event representations and the way they are mapped onto language are currently not well understood. Here, we review recent evidence on the link between event conceptualization and language, focusing on two core aspects of event representation: event roles and event boundaries. Empirical evidence in both domains shows that the cognitive representation of events aligns with the way these aspects of events are encoded in language, providing support for the presence of deep homologies between linguistic and cognitive event structure.
  • Ünal, E., Richards, C., Trueswell, J., & Papafragou, A. (2021). Representing agents, patients, goals and instruments in causative events: A cross-linguistic investigation of early language and cognition. Developmental Science, 24(6): e13116. doi:10.1111/desc.13116.

    Abstract

    Although it is widely assumed that the linguistic description of events is based on a structured representation of event components at the perceptual/conceptual level, little empirical work has tested this assumption directly. Here, we test the connection between language and perception/cognition cross-linguistically, focusing on the relative salience of causative event components in language and cognition. We draw on evidence from preschoolers speaking English or Turkish. In a picture description task, Turkish-speaking 3-5-year-olds mentioned Agents less than their English-speaking peers (Turkish allows subject drop); furthermore, both language groups mentioned Patients more frequently than Goals, and Instruments less frequently than either Patients or Goals. In a change blindness task, both language groups were equally accurate at detecting changes to Agents (despite surface differences in Agent mentions). The remaining components also behaved similarly: both language groups were less accurate in detecting changes to Instruments than either Patients or Goals (even though Turkish-speaking preschoolers were less accurate overall than their English-speaking peers). To our knowledge, this is the first study offering evidence for a strong—even though not strict—homology between linguistic and conceptual event roles in young learners cross-linguistically.
  • Ünal, E., & Papafragou, A. (2020). Relations between language and cognition: Evidentiality and sources of knowledge. Topics in Cognitive Science, 12(1), 115-135. doi:10.1111/tops.12355.

    Abstract

    Understanding and acquiring language involve mapping language onto conceptual representations. Nevertheless, several issues remain unresolved with respect to (a) how such mappings are performed, and (b) whether conceptual representations are susceptible to cross-linguistic influences. In this article, we discuss these issues focusing on the domain of evidentiality and sources of knowledge. Empirical evidence in this domain yields growing support for the proposal that linguistic categories of evidentiality are tightly linked to, build on, and reflect conceptual representations of sources of knowledge that are shared across speakers of different languages.
  • Ünal, E., & Papafragou, A. (2019). How children identify events from visual experience. Language Learning and Development, 15(2), 138-156. doi:10.1080/15475441.2018.1544075.

    Abstract

    Three experiments explored how well children recognize events from different types of visual experience: either by directly seeing an event or by indirectly experiencing it from post-event visual evidence. In Experiment 1, 4- and 5- to 6-year-old Turkish-speaking children (n = 32) successfully recognized events through either direct or indirect visual access. In Experiment 2, a new group of 4- and 5- to 6-year-olds (n = 37) reliably attributed event recognition to others who had direct or indirect visual access to events (even though performance was lower than Experiment 1). In both experiments, although children’s accuracy improved with age, there was no difference between the two types of access. Experiment 3 replicated the findings from the youngest participants of Experiments 1 and 2 with a matched sample of English-speaking 4-year-olds (n = 37). Thus children can use different kinds of visual experience to support event representations in themselves and others.
  • Ünal, E., Pinto, A., Bunger, A., & Papafragou, A. (2016). Monitoring sources of event memories: A cross-linguistic investigation. Journal of Memory and Language, 87, 157-176. doi:10.1016/j.jml.2015.10.009.

    Abstract

    When monitoring the origins of their memories, people tend to mistakenly attribute memories generated from internal processes (e.g., imagination, visualization) to perception. Here, we ask whether speaking a language that obligatorily encodes the source of information might help prevent such errors. We compare speakers of English to speakers of Turkish, a language that obligatorily encodes information source (direct/perceptual vs. indirect/hearsay or inference) for past events. In our experiments, participants reported having seen events that they had only inferred from post-event visual evidence. In general, error rates were higher when visual evidence that gave rise to inferences was relatively close to direct visual evidence. Furthermore, errors persisted even when participants were asked to report the specific sources of their memories. Crucially, these error patterns were equivalent across language groups, suggesting that speaking a language that obligatorily encodes source of information does not increase sensitivity to the distinction between perception and inference in event memory.
  • Ünal, E., & Papafragou, A. (2016). Interactions between language and mental representations. Language Learning, 66(3), 554-580. doi:10.1111/lang.12188.

    Abstract

    It has long been recognized that language interacts with visual and spatial processes. However, the nature and extent of these interactions are widely debated. The goal of this article is to review empirical findings across several domains to understand whether language affects the way speakers conceptualize the world even when they are not speaking or understanding speech. A second goal of the present review is to shed light on the mechanisms through which effects of language are transmitted. Across domains, there is growing support for the idea that although language does not lead to long-lasting changes in mental representations, it exerts powerful influences during momentary mental computations by either modulating attention or augmenting representational power.
  • Ünal, E., & Papafragou, A. (2016). Production–comprehension asymmetries and the acquisition of evidential morphology. Journal of Memory and Language, 89, 179-199. doi:10.1016/j.jml.2015.12.001.

    Abstract

    Although children typically comprehend the links between specific forms and their meanings before they produce the forms themselves, the opposite pattern also occurs. The nature of these ‘reverse asymmetries’ between production and comprehension remains debated. Here we focus on a striking case where production precedes comprehension in the acquisition of Turkish evidential morphology and explore theoretical explanations of this asymmetry. We show that 3- to 6-year-old Turkish learners produce evidential morphemes accurately (Experiment 1) but have difficulty with evidential comprehension (Experiment 2). Furthermore, comprehension failures persist across multiple tasks (Experiments 3–4). We suggest that evidential comprehension is delayed by the development of mental perspective-taking abilities needed to compute others’ knowledge sources. In support of this hypothesis, we find that children have difficulty reasoning about others’ evidence in non-linguistic tasks but the difficulty disappears when the tasks involve accessing one’s own evidential sources (Experiment 5).
