Karadöller, D. Z., Sümer, B., Ünal, E., & Özyürek, A. (2024). Sign advantage: Both children and adults’ spatial expressions in sign are more informative than those in speech and gestures combined. Journal of Child Language, 51(4), 876-902. doi:10.1017/S0305000922000642.
Abstract
Expressing Left-Right relations is challenging for speaking children. Yet, this challenge was absent for signing children, possibly due to iconicity in the visual-spatial modality of expression. We investigate whether there is also a modality advantage when speaking children’s co-speech gestures are considered. Eight-year-old child and adult hearing monolingual Turkish speakers and deaf signers of Turkish Sign Language described pictures of objects in various spatial relations. Descriptions were coded for informativeness in speech, sign, and speech-gesture combinations for encoding Left-Right relations. The use of co-speech gestures increased the informativeness of speakers’ spatial expressions compared to speech alone. This pattern was more prominent for children than adults. However, signing adults and children were more informative than speaking adults and children even when co-speech gestures were considered. Thus, both speaking and signing children benefit from iconic expressions in the visual modality. Finally, in each modality, children were less informative than adults, pointing to the challenge of this spatial domain in development.
Lee, S.-H.-y., Ünal, E., & Papafragou, A. (2024). Forming event units in language and cognition: A cross-linguistic investigation. In L. K. Samuelson, S. L. Frank, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 1885-1892).
Abstract
Humans are surrounded by dynamic, continuous streams of stimuli, yet the human mind segments these stimuli and organizes them into discrete event units. Theories of language production assume that segmenting and construing an event provides a starting point for speaking about the event (Levelt, 1989; Konopka & Brown-Schmidt, 2018). However, the precise units of event representation and their mapping to language remain elusive. In this work, we examine event unit formation in linguistic and conceptual event representations. Given cross-linguistic differences in motion event encoding (satellite- vs. verb-framed languages), we investigate the extent to which such differences in forming linguistic motion event units affect how speakers of different languages form cognitive event units in non-linguistic tasks. We test English (satellite-framed) and Turkish (verb-framed) speakers on verbal and non-verbal motion event tasks. Our results show that speakers do not rely on the same event unit representations when verbalizing motion vs. identifying motion event units in non-verbal tasks. Therefore, we suggest that conceptual and linguistic event representations are related but distinct levels of event structure.
Additional information
https://escholarship.org/uc/item/8wq230f5
Moser, C., Tarakçı, B., Ünal, E., & Grigoroglou, M. (2024). Multimodal Description of Instrument Events in Turkish and English. In L. K. Samuelson, S. L. Frank, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 341-348).
Abstract
Daily experiences are conceptualized as events involving multiple participants and their relations (i.e., thematic roles). When describing events, speakers often do not include all event participants involved. Here, we explore how underlying conceptual requirements and language-specific encoding options influence the content of event descriptions in speech and gesture in two typologically different languages (English, Turkish). Focusing on conceptually peripheral instruments, whose status is highly debated, we manipulated the conceptual status of event participants by including events that ‘require’ or ‘allow’ otherwise syntactically optional instruments. Results showed that the require-allow distinction did not manifest uniformly in Turkish and English in speech, gesture, or when both modalities were considered. However, mention of highly optional event participants (e.g., allowed instruments) was affected by language-specific syntactic encoding options. We conclude that, under more naturalistic elicitation conditions, planning descriptions of instrument events is more heavily affected by language-specific encoding than by the conceptual prominence of the roles.
Additional information
https://escholarship.org/uc/item/31h4s3qp
Tarakçı, B., Barış, C., & Ünal, E. (2024). Boundedness is represented in visual and auditory event cognition. In L. K. Samuelson, S. L. Frank, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 2612-2618).
Abstract
Viewers are sensitive to the distinction between visual events with an internal structure leading to a well-defined endpoint (bounded events) and events lacking this structure and a well-defined endpoint (unbounded events). Here, we asked whether boundedness could be represented in the auditory modality in a way similar to the visual modality. To investigate this question, we trained participants on bounded or unbounded categories of visual and auditory events in a category identification task. Later, we tested whether they could abstract the internal temporal structure of events and extend the (un)boundedness category to new examples in the same modality. The findings suggest that the principles and constraints that apply to the basic units of human experience in the visual modality have their counterparts in the auditory modality.
Additional information
https://escholarship.org/uc/item/15x9f213
Tınaz, B., & Ünal, E. (2024). Event segmentation in language and cognition. In L. K. Samuelson, S. L. Frank, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 184-191).
Abstract
We examine the relation between event segmentation in language and cognition in the domain of motion events, focusing on Turkish, a verb-framed language that segments motion paths in separate linguistic units (verb clauses). We compared motion events that had a path change to those that did not. In the linguistic task, participants were more likely to use multiple verb phrases when describing events that had a path change compared to those that did not. In the non-linguistic Dwell Time task, participants viewed self-paced slideshows of still images sampled from the motion event videos used in the linguistic task. Dwell times for slides corresponding to path changes were not significantly longer than those for temporally similar slides in the events without a path change. These findings suggest that event units in language may not have strong and stable influences on event segmentation in cognition.
Additional information
https://escholarship.org/uc/item/6nm5b85t
Ünal, E., Wilson, F., Trueswell, J., & Papafragou, A. (2024). Asymmetries in encoding event roles: Evidence from language and cognition. Cognition, 250: 105868. doi:10.1016/j.cognition.2024.105868.
Abstract
It has long been hypothesized that the linguistic structure of events, including event participants and their relative prominence, draws on the non-linguistic nature of events and the roles that these events license. However, the precise relation between the prominence of event participants in language and cognition has not been tested experimentally in a systematic way. Here we address this gap. In four experiments, we investigate the relative prominence of (animate) Agents, Patients, Goals and Instruments in the linguistic encoding of complex events and the prominence of these event roles in cognition as measured by visual search and change blindness tasks. The relative prominence of these event roles was largely similar—though not identical—across linguistic and non-linguistic measures. Across linguistic and non-linguistic tasks, Patients were more salient than Goals, which were more salient than Instruments. (Animate) Agents were more salient than Patients in linguistic descriptions and visual search; however, this asymmetrical pattern did not emerge in change detection. Overall, our results reveal homologies between the linguistic and non-linguistic prominence of individual event participants, thereby lending support to the claim that the linguistic structure of events builds on underlying conceptual event representations. We discuss implications of these findings for linguistic theory and theories of event cognition.
Ünal, E., Mamus, E., & Özyürek, A. (2024). Multimodal encoding of motion events in speech, gesture, and cognition. Language and Cognition, 16(4), 785-804. doi:10.1017/langcog.2023.61.
Abstract
How people communicate about motion events and how this is shaped by language typology are mostly studied with a focus on linguistic encoding in speech. Yet, human communication typically involves an interactional exchange of multimodal signals, such as hand gestures that have different affordances for representing event components. Here, we review recent empirical evidence on multimodal encoding of motion in speech and gesture to gain a deeper understanding of whether and how language typology shapes linguistic expressions in different modalities, and how this changes across different sensory modalities of input and interacts with other aspects of cognition. Empirical evidence strongly suggests that Talmy’s typology of event integration predicts multimodal event descriptions in speech and gesture and visual attention to event components prior to producing these descriptions. Furthermore, variability within the event itself, such as type and modality of stimuli, may override the influence of language typology, especially for expression of manner.
Karadöller, D. Z., Sümer, B., Ünal, E., & Özyürek, A. (2023). Late sign language exposure does not modulate the relation between spatial language and spatial memory in deaf children and adults. Memory & Cognition, 51, 582-600. doi:10.3758/s13421-022-01281-7.
Abstract
Prior work with hearing children acquiring a spoken language as their first language shows that spatial language and cognition are related systems and spatial language use predicts spatial memory. Here, we further investigate the extent of this relationship in signing deaf children and adults and ask if late sign language exposure, as well as the frequency and the type of spatial language use that might be affected by late exposure, modulate subsequent memory for spatial relations. To do so, we compared the spatial language and memory of 8-year-old late-signing children (after 2 years of exposure to a sign language at the school for the deaf) and late-signing adults to their native-signing counterparts. We elicited picture descriptions of Left-Right relations in Turkish Sign Language (Türk İşaret Dili) and measured the subsequent recognition memory accuracy of the described pictures. Results showed that late-signing adults and children were similar to their native-signing counterparts in how often they encoded the spatial relation. However, late-signing adults, but not children, differed from their native-signing counterparts in the type of spatial language they used. Nevertheless, neither late sign language exposure nor the frequency and type of spatial language use modulated spatial memory accuracy. Therefore, even though late language exposure seems to influence the type of spatial language used, it does not predict subsequent memory for spatial relations. We discuss the implications of these findings for theories that treat spatial language and cognition as related or as independent systems.
Karadöller, D. Z., Sümer, B., Ünal, E., & Özyürek, A. (2021). Spatial language use predicts spatial memory of children: Evidence from sign, speech, and speech-plus-gesture. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 672-678). Vienna: Cognitive Science Society.
Abstract
There is a strong relation between children’s exposure to spatial terms and their later memory accuracy. In the current study, we tested whether the production of spatial terms by children themselves predicts memory accuracy, and whether and how the language modality of these encodings modulates memory accuracy. Hearing child speakers of Turkish and deaf child signers of Turkish Sign Language described pictures of objects in various spatial relations to each other and were later tested for their memory of these pictures in a surprise memory task. We found that having described the spatial relation between the objects predicted better memory accuracy. However, the modality of these descriptions in sign, speech, or speech-plus-gesture did not reveal differences in memory accuracy. We discuss the implications of these findings for the relation between spatial language, memory, and the modality of encoding.
Ünal, E., Ji, Y., & Papafragou, A. (2021). From event representation to linguistic meaning. Topics in Cognitive Science, 13(1), 224-242. doi:10.1111/tops.12475.
Abstract
A fundamental aspect of human cognition is the ability to parse our constantly unfolding experience into meaningful representations of dynamic events and to communicate about these events with others. How do we communicate about events we have experienced? Influential theories of language production assume that the formulation and articulation of a linguistic message is preceded by preverbal apprehension that captures core aspects of the event. Yet the nature of these preverbal event representations and the way they are mapped onto language are currently not well understood. Here, we review recent evidence on the link between event conceptualization and language, focusing on two core aspects of event representation: event roles and event boundaries. Empirical evidence in both domains shows that the cognitive representation of events aligns with the way these aspects of events are encoded in language, providing support for the presence of deep homologies between linguistic and cognitive event structure.
Ünal, E., Richards, C., Trueswell, J., & Papafragou, A. (2021). Representing agents, patients, goals and instruments in causative events: A cross-linguistic investigation of early language and cognition. Developmental Science, 24(6): e13116. doi:10.1111/desc.13116.
Abstract
Although it is widely assumed that the linguistic description of events is based on a structured representation of event components at the perceptual/conceptual level, little empirical work has tested this assumption directly. Here, we test the connection between language and perception/cognition cross-linguistically, focusing on the relative salience of causative event components in language and cognition. We draw on evidence from preschoolers speaking English or Turkish. In a picture description task, Turkish-speaking 3-5-year-olds mentioned Agents less than their English-speaking peers (Turkish allows subject drop); furthermore, both language groups mentioned Patients more frequently than Goals, and Instruments less frequently than either Patients or Goals. In a change blindness task, both language groups were equally accurate at detecting changes to Agents (despite surface differences in Agent mentions). The remaining components also behaved similarly: both language groups were less accurate in detecting changes to Instruments than to either Patients or Goals (even though Turkish-speaking preschoolers were less accurate overall than their English-speaking peers). To our knowledge, this is the first study offering evidence for a strong—even though not strict—homology between linguistic and conceptual event roles in young learners cross-linguistically.