Ercenur Ünal

Publications

  • Karadöller, D. Z., Sümer, B., Ünal, E., & Özyürek, A. (2024). Sign advantage: Both children and adults’ spatial expressions in sign are more informative than those in speech and gestures combined. Journal of Child Language, 51(4), 876-902. doi:10.1017/S0305000922000642.

    Abstract

    Expressing Left-Right relations is challenging for speaking children. Yet, this challenge was absent for signing children, possibly due to iconicity in the visual-spatial modality of expression. We investigate whether there is also a modality advantage when speaking children’s co-speech gestures are considered. Eight-year-old children and adults who were hearing monolingual Turkish speakers or deaf signers of Turkish Sign Language described pictures of objects in various spatial relations. Descriptions were coded for informativeness in speech, sign, and speech-gesture combinations for encoding Left-Right relations. The use of co-speech gestures increased the informativeness of speakers’ spatial expressions compared to speech only. This pattern was more prominent for children than adults. However, signing adults and children were more informative than child and adult speakers even when co-speech gestures were considered. Thus, both speaking and signing children benefit from iconic expressions in the visual modality. Finally, in each modality, children were less informative than adults, pointing to the challenge of this spatial domain in development.
  • Lee, S.-H.-y., Ünal, E., & Papafragou, A. (2024). Forming event units in language and cognition: A cross-linguistic investigation. In L. K. Samuelson, S. L. Frank, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 1885-1892).

    Abstract

    Humans are surrounded by dynamic, continuous streams of stimuli, yet the human mind segments these stimuli and organizes them into discrete event units. Theories of language production assume that segmenting and construing an event provides a starting point for speaking about the event (Levelt, 1989; Konopka & Brown-Schmidt, 2018). However, the precise units of event representation and their mapping to language remain elusive. In this work, we examine event unit formation in linguistic and conceptual event representations. Given cross-linguistic differences in motion event encoding (satellite vs. verb-framed languages), we investigate the extent to which such differences in forming linguistic motion event units affect how speakers of different languages form cognitive event units in non-linguistic tasks. We test English (satellite-framed) and Turkish (verb-framed) speakers on verbal and non-verbal motion event tasks. Our results show that speakers do not rely on the same event unit representations when verbalizing motion vs. identifying motion event units in non-verbal tasks. Therefore, we suggest that conceptual and linguistic event representations are related but distinct levels of event structure.
  • Moser, C., Tarakçı, B., Ünal, E., & Grigoroglou, M. (2024). Multimodal Description of Instrument Events in Turkish and English. In L. K. Samuelson, S. L. Frank, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 341-348).

    Abstract

    Daily experiences are conceptualized as events involving multiple participants and their relations (i.e., thematic roles). When describing events, speakers often do not include all event participants involved. Here, we explore how underlying conceptual requirements and language-specific encoding options influence the content of event descriptions in speech and gesture in two typologically different languages (English, Turkish). Focusing on conceptually peripheral instruments whose status is highly debated, we manipulated the conceptual status of event participants by including events that ‘require’ or ‘allow’ otherwise syntactically optional instruments. Results showed that the require-allow distinction did not manifest uniformly in Turkish and English in speech, gesture, or when both modalities were considered. However, mention of highly optional event participants (e.g., allowed instruments) was affected by language-specific syntactic encoding options. We conclude that, under more naturalistic elicitation conditions, planning descriptions of instrument events is more heavily affected by language-specific encoding than by the conceptual prominence of the roles.
  • Tarakçı, B., Barış, C., & Ünal, E. (2024). Boundedness is represented in visual and auditory event cognition. In L. K. Samuelson, S. L. Frank, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 2612-2618).

    Abstract

    Viewers are sensitive to the distinction between visual events with an internal structure leading to a well-defined endpoint (bounded events) and events lacking this structure and a well-defined endpoint (unbounded events). Here, we asked whether boundedness could be represented in the auditory modality in a way similar to the visual modality. To investigate this question, we trained participants on bounded or unbounded event categories with visual and auditory events in a category identification task. Later, we tested whether they could abstract the internal temporal structure of events and extend the (un)boundedness category to new examples in the same modality. The findings suggest that the principles and constraints that apply to the basic units of human experience in the visual modality have their counterparts in the auditory modality.
  • Tınaz, B., & Ünal, E. (2024). Event segmentation in language and cognition. In L. K. Samuelson, S. L. Frank, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 184-191).

    Abstract

    We examine the relation between event segmentation in language and cognition in the domain of motion events, focusing on Turkish, a verb-framed language that segments motion paths in separate linguistic units (verb clauses). We compare motion events that have a path change to those that do not. In the linguistic task, participants were more likely to use multiple verb phrases when describing events that had a path change compared to those that did not. In the non-linguistic Dwell Time task, participants viewed self-paced slideshows of still images sampled from the motion event videos in the linguistic task. Dwell times for slides corresponding to path changes were not significantly longer than those for temporally similar slides in the events without a path change. These findings suggest that event units in language may not have strong and stable influences on event segmentation in cognition.
  • Ünal, E., Wilson, F., Trueswell, J., & Papafragou, A. (2024). Asymmetries in encoding event roles: Evidence from language and cognition. Cognition, 250: 105868. doi:10.1016/j.cognition.2024.105868.

    Abstract

    It has long been hypothesized that the linguistic structure of events, including event participants and their relative prominence, draws on the non-linguistic nature of events and the roles that these events license. However, the precise relation between the prominence of event participants in language and cognition has not been tested experimentally in a systematic way. Here we address this gap. In four experiments, we investigate the relative prominence of (animate) Agents, Patients, Goals and Instruments in the linguistic encoding of complex events and the prominence of these event roles in cognition as measured by visual search and change blindness tasks. The relative prominence of these event roles was largely similar—though not identical—across linguistic and non-linguistic measures. Across linguistic and non-linguistic tasks, Patients were more salient than Goals, which were more salient than Instruments. (Animate) Agents were more salient than Patients in linguistic descriptions and visual search; however, this asymmetrical pattern did not emerge in change detection. Overall, our results reveal homologies between the linguistic and non-linguistic prominence of individual event participants, thereby lending support to the claim that the linguistic structure of events builds on underlying conceptual event representations. We discuss implications of these findings for linguistic theory and theories of event cognition.
  • Ünal, E., Mamus, E., & Özyürek, A. (2024). Multimodal encoding of motion events in speech, gesture, and cognition. Language and Cognition, 16(4), 785-804. doi:10.1017/langcog.2023.61.

    Abstract

    How people communicate about motion events and how this is shaped by language typology are mostly studied with a focus on linguistic encoding in speech. Yet, human communication typically involves an interactional exchange of multimodal signals, such as hand gestures that have different affordances for representing event components. Here, we review recent empirical evidence on multimodal encoding of motion in speech and gesture to gain a deeper understanding of whether and how language typology shapes linguistic expressions in different modalities, and how this changes across different sensory modalities of input and interacts with other aspects of cognition. Empirical evidence strongly suggests that Talmy’s typology of event integration predicts multimodal event descriptions in speech and gesture and visual attention to event components prior to producing these descriptions. Furthermore, variability within the event itself, such as type and modality of stimuli, may override the influence of language typology, especially for expression of manner.
  • Ünal, E., & Papafragou, A. (2018). Evidentials, information sources and cognition. In A. Y. Aikhenvald (Ed.), The Oxford Handbook of Evidentiality (pp. 175-184). Oxford University Press.
  • Ünal, E., & Papafragou, A. (2018). The relation between language and mental state reasoning. In J. Proust, & M. Fortier (Eds.), Metacognitive diversity: An interdisciplinary approach (pp. 153-169). Oxford: Oxford University Press.
