Ercenur Ünal

Publications

  • Karadöller, D. Z., Sumer, B., Ünal, E., & Özyürek, A. (2023). Late sign language exposure does not modulate the relation between spatial language and spatial memory in deaf children and adults. Memory & Cognition, 51, 582-600. doi:10.3758/s13421-022-01281-7.

    Abstract

    Prior work with hearing children acquiring a spoken language as their first language shows that spatial language and cognition are related systems and spatial language use predicts spatial memory. Here, we further investigate the extent of this relationship in signing deaf children and adults and ask if late sign language exposure, as well as the frequency and the type of spatial language use that might be affected by late exposure, modulate subsequent memory for spatial relations. To do so, we compared spatial language and memory of 8-year-old late-signing children (after 2 years of exposure to a sign language at the school for the deaf) and late-signing adults to their native-signing counterparts. We elicited picture descriptions of Left-Right relations in Turkish Sign Language (Türk İşaret Dili) and measured the subsequent recognition memory accuracy of the described pictures. Results showed that late-signing adults and children were similar to their native-signing counterparts in how often they encoded the spatial relation. However, late-signing adults but not children differed from their native-signing counterparts in the type of spatial language they used. Yet neither late sign language exposure nor the frequency and type of spatial language use modulated spatial memory accuracy. Therefore, even though late language exposure seems to influence the type of spatial language use, this does not predict subsequent memory for spatial relations. We discuss the implications of these findings based on the theories concerning the correspondence between spatial language and cognition as related or rather independent systems.
  • Avcılar, G., & Ünal, E. (2022). Linguistic encoding of inferential evidence for events. In J. Culbertson, A. Perfors, H. Rabagliati, & V. Ramenzoni (Eds.), Proceedings of the 44th Annual Conference of the Cognitive Science Society (CogSci 2022) (pp. 2825-2830).

    Abstract

    How people learn about events often varies, with some events perceived in their entirety and others inferred based on the available evidence. Here, we investigate how children and adults linguistically encode the sources of their event knowledge. We focus on Turkish – a language that obligatorily encodes source of information for past events using two evidentiality markers. Children (4- to 5-year-olds and 6- to 7-year-olds) and adults watched and described events that they directly saw or inferred based on visual cues with manipulated degrees of indirectness. Overall, participants modified the evidential marking in their descriptions depending on (a) whether they saw or inferred the event and (b) the indirectness of the visual cues giving rise to an inference. There were no differences across age groups. These findings suggest that Turkish-speaking adults’ and children’s use of evidential markers is sensitive to the indirectness of the inferential evidence for events.
  • Ter Bekke, M., Özyürek, A., & Ünal, E. (2022). Speaking but not gesturing predicts event memory: A cross-linguistic comparison. Language and Cognition, 14(3), 362-384. doi:10.1017/langcog.2022.3.

    Abstract

    Every day people see, describe, and remember motion events. However, the relation between multimodal encoding of motion events in speech and gesture, and memory is not yet fully understood. Moreover, whether language typology modulates this relation remains to be tested. This study investigates whether the type of motion event information (path or manner) mentioned in speech and gesture predicts which information is remembered and whether this varies across speakers of typologically different languages. Dutch- and Turkish-speakers watched and described motion events and completed a surprise recognition memory task. For both Dutch- and Turkish-speakers, manner memory was at chance level. Participants who mentioned path in speech during encoding were more accurate at detecting changes to the path in the memory task. The relation between mentioning path in speech and path memory did not vary cross-linguistically. Finally, the co-speech gesture did not predict memory above mentioning path in speech. These findings suggest that how speakers describe a motion event in speech is more important than the typology of the speakers’ native language in predicting motion event memory. The motion event videos are available for download for future research at https://osf.io/p8cas/.

  • Ünal, E., Manhardt, F., & Özyürek, A. (2022). Speaking and gesturing guide event perception during message conceptualization: Evidence from eye movements. Cognition, 225: 105127. doi:10.1016/j.cognition.2022.105127.

    Abstract

    Speakers’ visual attention to events is guided by linguistic conceptualization of information in spoken language production and in language-specific ways. Does production of language-specific co-speech gestures further guide speakers’ visual attention during message preparation? Here, we examine the link between visual attention and multimodal event descriptions in Turkish. Turkish is a verb-framed language where speakers’ speech and gesture show language specificity with path of motion mostly expressed within the main verb accompanied by path gestures. Turkish-speaking adults viewed motion events while their eye movements were recorded during non-linguistic (viewing-only) and linguistic (viewing-before-describing) tasks. The relative attention allocated to path over manner was higher in the linguistic task compared to the non-linguistic task. Furthermore, the relative attention allocated to path over manner within the linguistic task was higher when speakers (a) encoded path in the main verb versus outside the verb and (b) used additional path gestures accompanying speech versus not. Results strongly suggest that speakers’ visual attention is guided by language-specific event encoding not only in speech but also in gesture. This provides evidence consistent with models that propose integration of speech and gesture at the conceptualization level of language production and suggests that the links between the eye and the mouth may be extended to the eye and the hand.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Özyürek, A. (2021). Spatial language use predicts spatial memory of children: Evidence from sign, speech, and speech-plus-gesture. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 672-678). Vienna: Cognitive Science Society.

    Abstract

    There is a strong relation between children’s exposure to spatial terms and their later memory accuracy. In the current study, we tested whether the production of spatial terms by children themselves predicts memory accuracy and whether and how language modality of these encodings modulates memory accuracy differently. Hearing child speakers of Turkish and deaf child signers of Turkish Sign Language described pictures of objects in various spatial relations to each other and were later tested for their memory accuracy of these pictures in a surprise memory task. We found that having described the spatial relation between the objects predicted better memory accuracy. However, the modality of these descriptions in sign, speech, or speech-plus-gesture did not reveal differences in memory accuracy. We discuss the implications of these findings for the relation between spatial language, memory, and the modality of encoding.
  • Ünal, E., Ji, Y., & Papafragou, A. (2021). From event representation to linguistic meaning. Topics in Cognitive Science, 13(1), 224-242. doi:10.1111/tops.12475.

    Abstract

    A fundamental aspect of human cognition is the ability to parse our constantly unfolding experience into meaningful representations of dynamic events and to communicate about these events with others. How do we communicate about events we have experienced? Influential theories of language production assume that the formulation and articulation of a linguistic message is preceded by preverbal apprehension that captures core aspects of the event. Yet the nature of these preverbal event representations and the way they are mapped onto language are currently not well understood. Here, we review recent evidence on the link between event conceptualization and language, focusing on two core aspects of event representation: event roles and event boundaries. Empirical evidence in both domains shows that the cognitive representation of events aligns with the way these aspects of events are encoded in language, providing support for the presence of deep homologies between linguistic and cognitive event structure.
  • Ünal, E., Richards, C., Trueswell, J., & Papafragou, A. (2021). Representing agents, patients, goals and instruments in causative events: A cross-linguistic investigation of early language and cognition. Developmental Science, 24(6): e13116. doi:10.1111/desc.13116.

    Abstract

    Although it is widely assumed that the linguistic description of events is based on a structured representation of event components at the perceptual/conceptual level, little empirical work has tested this assumption directly. Here, we test the connection between language and perception/cognition cross-linguistically, focusing on the relative salience of causative event components in language and cognition. We draw on evidence from preschoolers speaking English or Turkish. In a picture description task, Turkish-speaking 3-5-year-olds mentioned Agents less than their English-speaking peers (Turkish allows subject drop); furthermore, both language groups mentioned Patients more frequently than Goals, and Instruments less frequently than either Patients or Goals. In a change blindness task, both language groups were equally accurate at detecting changes to Agents (despite surface differences in Agent mentions). The remaining components also behaved similarly: both language groups were less accurate in detecting changes to Instruments than either Patients or Goals (even though Turkish-speaking preschoolers were less accurate overall than their English-speaking peers). To our knowledge, this is the first study offering evidence for a strong—even though not strict—homology between linguistic and conceptual event roles in young learners cross-linguistically.
