Ercenur Ünal

Publications

  • Avcılar, G., & Ünal, E. (2022). Linguistic encoding of inferential evidence for events. In J. Culbertson, A. Perfors, H. Rabagliati, & V. Ramenzoni (Eds.), Proceedings of the 44th Annual Conference of the Cognitive Science Society (CogSci 2022) (pp. 2825-2830).

    Abstract

    How people learn about events often varies, with some events perceived in their entirety and others inferred from the available evidence. Here, we investigate how children and adults linguistically encode the sources of their event knowledge. We focus on Turkish – a language that obligatorily encodes source of information for past events using two evidentiality markers. Children (4- to 5-year-olds and 6- to 7-year-olds) and adults watched and described events that they directly saw or inferred based on visual cues with manipulated degrees of indirectness. Overall, participants modified the evidential marking in their descriptions depending on (a) whether they saw or inferred the event and (b) the indirectness of the visual cues giving rise to an inference. There were no differences across age groups. These findings suggest that Turkish-speaking adults’ and children’s use of evidential markers is sensitive to the indirectness of the inferential evidence for events.
  • Ter Bekke, M., Özyürek, A., & Ünal, E. (2022). Speaking but not gesturing predicts event memory: A cross-linguistic comparison. Language and Cognition, 14(3), 362-384. doi:10.1017/langcog.2022.3.

    Abstract

    Every day people see, describe, and remember motion events. However, the relation between multimodal encoding of motion events in speech and gesture, and memory is not yet fully understood. Moreover, whether language typology modulates this relation remains to be tested. This study investigates whether the type of motion event information (path or manner) mentioned in speech and gesture predicts which information is remembered and whether this varies across speakers of typologically different languages. Dutch and Turkish speakers watched and described motion events and completed a surprise recognition memory task. For both Dutch and Turkish speakers, manner memory was at chance level. Participants who mentioned path in speech during encoding were more accurate at detecting changes to the path in the memory task. The relation between mentioning path in speech and path memory did not vary cross-linguistically. Finally, co-speech gestures did not predict memory over and above mentioning path in speech. These findings suggest that how speakers describe a motion event in speech is more important than the typology of the speakers’ native language in predicting motion event memory. The motion event videos are available for download for future research at https://osf.io/p8cas/.

  • Ünal, E., Manhardt, F., & Özyürek, A. (2022). Speaking and gesturing guide event perception during message conceptualization: Evidence from eye movements. Cognition, 225, 105127. doi:10.1016/j.cognition.2022.105127.

    Abstract

    Speakers’ visual attention to events is guided by linguistic conceptualization of information in spoken language production and in language-specific ways. Does production of language-specific co-speech gestures further guide speakers’ visual attention during message preparation? Here, we examine the link between visual attention and multimodal event descriptions in Turkish. Turkish is a verb-framed language where speakers’ speech and gesture show language specificity with path of motion mostly expressed within the main verb accompanied by path gestures. Turkish-speaking adults viewed motion events while their eye movements were recorded during non-linguistic (viewing-only) and linguistic (viewing-before-describing) tasks. The relative attention allocated to path over manner was higher in the linguistic task compared to the non-linguistic task. Furthermore, the relative attention allocated to path over manner within the linguistic task was higher when speakers (a) encoded path in the main verb versus outside the verb and (b) used additional path gestures accompanying speech versus not. Results strongly suggest that speakers’ visual attention is guided by language-specific event encoding not only in speech but also in gesture. This provides evidence consistent with models that propose integration of speech and gesture at the conceptualization level of language production and suggests that the links between the eye and the mouth may be extended to the eye and the hand.
  • Ünal, E., Pinto, A., Bunger, A., & Papafragou, A. (2016). Monitoring sources of event memories: A cross-linguistic investigation. Journal of Memory and Language, 87, 157-176. doi:10.1016/j.jml.2015.10.009.

    Abstract

    When monitoring the origins of their memories, people tend to mistakenly attribute memories generated from internal processes (e.g., imagination, visualization) to perception. Here, we ask whether speaking a language that obligatorily encodes the source of information might help prevent such errors. We compare speakers of English to speakers of Turkish, a language that obligatorily encodes information source (direct/perceptual vs. indirect/hearsay or inference) for past events. In our experiments, participants reported having seen events that they had only inferred from post-event visual evidence. In general, error rates were higher when visual evidence that gave rise to inferences was relatively close to direct visual evidence. Furthermore, errors persisted even when participants were asked to report the specific sources of their memories. Crucially, these error patterns were equivalent across language groups, suggesting that speaking a language that obligatorily encodes source of information does not increase sensitivity to the distinction between perception and inference in event memory.
  • Ünal, E., & Papafragou, A. (2016). Interactions between language and mental representations. Language Learning, 66(3), 554-580. doi:10.1111/lang.12188.

    Abstract

    It has long been recognized that language interacts with visual and spatial processes. However, the nature and extent of these interactions are widely debated. The goal of this article is to review empirical findings across several domains to understand whether language affects the way speakers conceptualize the world even when they are not speaking or understanding speech. A second goal of the present review is to shed light on the mechanisms through which effects of language are transmitted. Across domains, there is growing support for the idea that although language does not lead to long-lasting changes in mental representations, it exerts powerful influences during momentary mental computations by either modulating attention or augmenting representational power.
  • Ünal, E., & Papafragou, A. (2016). Production–comprehension asymmetries and the acquisition of evidential morphology. Journal of Memory and Language, 89, 179-199. doi:10.1016/j.jml.2015.12.001.

    Abstract

    Although children typically comprehend the links between specific forms and their meanings before they produce the forms themselves, the opposite pattern also occurs. The nature of these ‘reverse asymmetries’ between production and comprehension remains debated. Here we focus on a striking case where production precedes comprehension in the acquisition of Turkish evidential morphology and explore theoretical explanations of this asymmetry. We show that 3- to 6-year-old Turkish learners produce evidential morphemes accurately (Experiment 1) but have difficulty with evidential comprehension (Experiment 2). Furthermore, comprehension failures persist across multiple tasks (Experiments 3–4). We suggest that evidential comprehension is delayed by the development of mental perspective-taking abilities needed to compute others’ knowledge sources. In support of this hypothesis, we find that children have difficulty reasoning about others’ evidence in non-linguistic tasks, but the difficulty disappears when the tasks involve accessing one’s own evidential sources (Experiment 5).
