Karadöller, D. Z., Sümer, B., Ünal, E., & Özyürek, A. (2024). Sign advantage: Both children and adults’ spatial expressions in sign are more informative than those in speech and gestures combined. Journal of Child Language, 51(4), 876-902. doi:10.1017/S0305000922000642.
Abstract
Expressing left-right relations is challenging for speaking children. Yet this challenge was absent for signing children, possibly due to iconicity in the visual-spatial modality of expression. We investigate whether there is also a modality advantage when speaking children’s co-speech gestures are considered. Eight-year-old child and adult hearing monolingual Turkish speakers and deaf signers of Turkish Sign Language described pictures of objects in various spatial relations. Descriptions were coded for informativeness in speech, sign, and speech-gesture combinations for encoding left-right relations. The use of co-speech gestures increased the informativeness of speakers’ spatial expressions compared to speech only. This pattern was more prominent for children than adults. However, signing adults and children were more informative than speaking adults and children even when co-speech gestures were considered. Thus, both speaking and signing children benefit from iconic expressions in the visual modality. Finally, in each modality, children were less informative than adults, pointing to the challenge of this spatial domain in development.
Lee, S.-H.-y., Ünal, E., & Papafragou, A. (2024). Forming event units in language and cognition: A cross-linguistic investigation. In L. K. Samuelson, S. L. Frank, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 1885-1892).
Abstract
Humans are surrounded by dynamic, continuous streams of stimuli, yet the human mind segments these stimuli and organizes them into discrete event units. Theories of language production assume that segmenting and construing an event provides a starting point for speaking about the event (Levelt, 1989; Konopka & Brown-Schmidt, 2018). However, the precise units of event representation and their mapping to language remain elusive. In this work, we examine event unit formation in linguistic and conceptual event representations. Given cross-linguistic differences in motion event encoding (satellite vs. verb-framed languages), we investigate the extent to which such differences in forming linguistic motion event units affect how speakers of different languages form cognitive event units in non-linguistic tasks. We test English (satellite-framed) and Turkish (verb-framed) speakers on verbal and non-verbal motion event tasks. Our results show that speakers do not rely on the same event unit representations when verbalizing motion vs. identifying motion event units in non-verbal tasks. Therefore, we suggest that conceptual and linguistic event representations are related but distinct levels of event structure.
Additional information
https://escholarship.org/uc/item/8wq230f5
Moser, C., Tarakçı, B., Ünal, E., & Grigoroglou, M. (2024). Multimodal description of instrument events in Turkish and English. In L. K. Samuelson, S. L. Frank, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 341-348).
Abstract
Daily experiences are conceptualized as events involving multiple participants and their relations (i.e., thematic roles). When describing events, speakers often do not include all event participants involved. Here, we explore how underlying conceptual requirements and language-specific encoding options influence the content of event descriptions in speech and gesture in two typologically different languages (English, Turkish). Focusing on conceptually peripheral instruments whose status is highly debated, we manipulated the conceptual status of event participants by including events that ‘require’ or ‘allow’ otherwise syntactically optional instruments. Results showed that the require-allow distinction did not manifest uniformly in Turkish and English in speech, gesture, or when both modalities were considered. However, mention of highly optional event participants (e.g., allowed instruments) was affected by language-specific syntactic encoding options. We conclude that, under more naturalistic elicitation conditions, planning descriptions of instrument events is more heavily affected by language-specific encoding than by the conceptual prominence of the roles.
Additional information
https://escholarship.org/uc/item/31h4s3qp
Tarakçı, B., Barış, C., & Ünal, E. (2024). Boundedness is represented in visual and auditory event cognition. In L. K. Samuelson, S. L. Frank, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 2612-2618).
Abstract
Viewers are sensitive to the distinction between visual events with an internal structure leading to a well-defined endpoint (bounded events) and events lacking this structure and a well-defined endpoint (unbounded events). Here, we asked whether boundedness could be represented in the auditory modality in a way similar to the visual modality. To investigate this question, we trained participants with visual and auditory events on bounded or unbounded event categories in a category identification task. Later, we tested whether they could abstract the internal temporal structure of events and extend the (un)boundedness category to new examples in the same modality. These findings suggest that the principles and constraints that apply to the basic units of human experience in the visual modality have their counterparts in the auditory modality.
Additional information
https://escholarship.org/uc/item/15x9f213
Tınaz, B., & Ünal, E. (2024). Event segmentation in language and cognition. In L. K. Samuelson, S. L. Frank, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 184-191).
Abstract
We examine the relation between event segmentation in language and cognition in the domain of motion events, focusing on Turkish, a verb-framed language that segments motion paths in separate linguistic units (verb clauses). We compared motion events that included a path change to those that did not. In the linguistic task, participants were more likely to use multiple verb phrases when describing events that had a path change compared to those that did not have a path change. In the non-linguistic Dwell Time task, participants viewed self-paced slideshows of still images sampled from the motion event videos in the linguistic task. Dwell times for slides corresponding to path changes were not significantly longer than those for temporally similar slides in the events without a path change. These findings suggest that event units in language may not have strong and stable influences on event segmentation in cognition.
Additional information
https://escholarship.org/uc/item/6nm5b85t
Ünal, E., Wilson, F., Trueswell, J., & Papafragou, A. (2024). Asymmetries in encoding event roles: Evidence from language and cognition. Cognition, 250: 105868. doi:10.1016/j.cognition.2024.105868.
Abstract
It has long been hypothesized that the linguistic structure of events, including event participants and their relative prominence, draws on the non-linguistic nature of events and the roles that these events license. However, the precise relation between the prominence of event participants in language and cognition has not been tested experimentally in a systematic way. Here we address this gap. In four experiments, we investigate the relative prominence of (animate) Agents, Patients, Goals and Instruments in the linguistic encoding of complex events and the prominence of these event roles in cognition as measured by visual search and change blindness tasks. The relative prominence of these event roles was largely similar—though not identical—across linguistic and non-linguistic measures. Across linguistic and non-linguistic tasks, Patients were more salient than Goals, which were more salient than Instruments. (Animate) Agents were more salient than Patients in linguistic descriptions and visual search; however, this asymmetrical pattern did not emerge in change detection. Overall, our results reveal homologies between the linguistic and non-linguistic prominence of individual event participants, thereby lending support to the claim that the linguistic structure of events builds on underlying conceptual event representations. We discuss implications of these findings for linguistic theory and theories of event cognition.
Ünal, E., Mamus, E., & Özyürek, A. (2024). Multimodal encoding of motion events in speech, gesture, and cognition. Language and Cognition, 16(4), 785-804. doi:10.1017/langcog.2023.61.
Abstract
How people communicate about motion events and how this is shaped by language typology are mostly studied with a focus on linguistic encoding in speech. Yet, human communication typically involves an interactional exchange of multimodal signals, such as hand gestures that have different affordances for representing event components. Here, we review recent empirical evidence on multimodal encoding of motion in speech and gesture to gain a deeper understanding of whether and how language typology shapes linguistic expressions in different modalities, and how this changes across different sensory modalities of input and interacts with other aspects of cognition. Empirical evidence strongly suggests that Talmy’s typology of event integration predicts multimodal event descriptions in speech and gesture and visual attention to event components prior to producing these descriptions. Furthermore, variability within the event itself, such as type and modality of stimuli, may override the influence of language typology, especially for expression of manner.
Avcılar, G., & Ünal, E. (2022). Linguistic encoding of inferential evidence for events. In J. Culbertson, A. Perfors, H. Rabagliati, & V. Ramenzoni (Eds.), Proceedings of the 44th Annual Conference of the Cognitive Science Society (CogSci 2022) (pp. 2825-2830).
Abstract
How people learn about events often varies, with some events perceived in their entirety and others inferred based on the available evidence. Here, we investigate how children and adults linguistically encode the sources of their event knowledge. We focus on Turkish – a language that obligatorily encodes source of information for past events using two evidentiality markers. Children (4- to 5-year-olds and 6- to 7-year-olds) and adults watched and described events that they directly saw or inferred based on visual cues with manipulated degrees of indirectness. Overall, participants modified the evidential marking in their descriptions depending on (a) whether they saw or inferred the event and (b) the indirectness of the visual cues giving rise to an inference. There were no differences across age groups. These findings suggest that Turkish-speaking adults’ and children’s use of evidential markers is sensitive to the indirectness of the inferential evidence for events.
Additional information
https://escholarship.org/uc/item/8ft2x61c
Ter Bekke, M., Özyürek, A., & Ünal, E. (2022). Speaking but not gesturing predicts event memory: A cross-linguistic comparison. Language and Cognition, 14(3), 362-384. doi:10.1017/langcog.2022.3.
Abstract
Every day people see, describe, and remember motion events. However, the relation between multimodal encoding of motion events in speech and gesture, and memory is not yet fully understood. Moreover, whether language typology modulates this relation remains to be tested. This study investigates whether the type of motion event information (path or manner) mentioned in speech and gesture predicts which information is remembered and whether this varies across speakers of typologically different languages. Dutch- and Turkish-speakers watched and described motion events and completed a surprise recognition memory task. For both Dutch- and Turkish-speakers, manner memory was at chance level. Participants who mentioned path in speech during encoding were more accurate at detecting changes to the path in the memory task. The relation between mentioning path in speech and path memory did not vary cross-linguistically. Finally, the co-speech gesture did not predict memory above mentioning path in speech. These findings suggest that how speakers describe a motion event in speech is more important than the typology of the speakers’ native language in predicting motion event memory. The motion event videos are available for download for future research at https://osf.io/p8cas/.
Additional information
S1866980822000035sup001.docx
Ünal, E., Manhardt, F., & Özyürek, A. (2022). Speaking and gesturing guide event perception during message conceptualization: Evidence from eye movements. Cognition, 225: 105127. doi:10.1016/j.cognition.2022.105127.
Abstract
Speakers’ visual attention to events is guided by linguistic conceptualization of information in spoken language production and in language-specific ways. Does production of language-specific co-speech gestures further guide speakers’ visual attention during message preparation? Here, we examine the link between visual attention and multimodal event descriptions in Turkish. Turkish is a verb-framed language where speakers’ speech and gesture show language specificity with path of motion mostly expressed within the main verb accompanied by path gestures. Turkish-speaking adults viewed motion events while their eye movements were recorded during non-linguistic (viewing-only) and linguistic (viewing-before-describing) tasks. The relative attention allocated to path over manner was higher in the linguistic task compared to the non-linguistic task. Furthermore, the relative attention allocated to path over manner within the linguistic task was higher when speakers (a) encoded path in the main verb versus outside the verb and (b) used additional path gestures accompanying speech versus not. Results strongly suggest that speakers’ visual attention is guided by language-specific event encoding not only in speech but also in gesture. This provides evidence consistent with models that propose integration of speech and gesture at the conceptualization level of language production and suggests that the links between the eye and the mouth may be extended to the eye and the hand.
Ünal, E., Pinto, A., Bunger, A., & Papafragou, A. (2016). Monitoring sources of event memories: A cross-linguistic investigation. Journal of Memory and Language, 87, 157-176. doi:10.1016/j.jml.2015.10.009.
Abstract
When monitoring the origins of their memories, people tend to mistakenly attribute memories generated from internal processes (e.g., imagination, visualization) to perception. Here, we ask whether speaking a language that obligatorily encodes the source of information might help prevent such errors. We compare speakers of English to speakers of Turkish, a language that obligatorily encodes information source (direct/perceptual vs. indirect/hearsay or inference) for past events. In our experiments, participants reported having seen events that they had only inferred from post-event visual evidence. In general, error rates were higher when visual evidence that gave rise to inferences was relatively close to direct visual evidence. Furthermore, errors persisted even when participants were asked to report the specific sources of their memories. Crucially, these error patterns were equivalent across language groups, suggesting that speaking a language that obligatorily encodes source of information does not increase sensitivity to the distinction between perception and inference in event memory.
Ünal, E., & Papafragou, A. (2016). Interactions between language and mental representations. Language Learning, 66(3), 554-580. doi:10.1111/lang.12188.
Abstract
It has long been recognized that language interacts with visual and spatial processes. However, the nature and extent of these interactions are widely debated. The goal of this article is to review empirical findings across several domains to understand whether language affects the way speakers conceptualize the world even when they are not speaking or understanding speech. A second goal of the present review is to shed light on the mechanisms through which effects of language are transmitted. Across domains, there is growing support for the idea that although language does not lead to long-lasting changes in mental representations, it exerts powerful influences during momentary mental computations by either modulating attention or augmenting representational power.
Ünal, E., & Papafragou, A. (2016). Production–comprehension asymmetries and the acquisition of evidential morphology. Journal of Memory and Language, 89, 179-199. doi:10.1016/j.jml.2015.12.001.
Abstract
Although children typically comprehend the links between specific forms and their meanings before they produce the forms themselves, the opposite pattern also occurs. The nature of these ‘reverse asymmetries’ between production and comprehension remains debated. Here we focus on a striking case where production precedes comprehension in the acquisition of Turkish evidential morphology and explore theoretical explanations of this asymmetry. We show that 3- to 6-year-old Turkish learners produce evidential morphemes accurately (Experiment 1) but have difficulty with evidential comprehension (Experiment 2). Furthermore, comprehension failures persist across multiple tasks (Experiments 3–4). We suggest that evidential comprehension is delayed by the development of mental perspective-taking abilities needed to compute others’ knowledge sources. In support of this hypothesis, we find that children have difficulty reasoning about others’ evidence in non-linguistic tasks but the difficulty disappears when the tasks involve accessing one’s own evidential sources (Experiment 5).