  • Karadöller, D. Z., Sumer, B., Ünal, E., & Ozyurek, A. (2021). Spatial language use predicts spatial memory of children: Evidence from sign, speech, and speech-plus-gesture. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 672-678). Vienna: Cognitive Science Society.

    Abstract

    There is a strong relation between children’s exposure to spatial terms and their later memory accuracy. In the current study, we tested whether children’s own production of spatial terms predicts memory accuracy, and whether and how the language modality of these encodings modulates memory accuracy. Hearing child speakers of Turkish and deaf child signers of Turkish Sign Language described pictures of objects in various spatial relations to each other and were later tested on their memory accuracy for these pictures in a surprise memory task. We found that having described the spatial relation between the objects predicted better memory accuracy. However, the modality of these descriptions in sign, speech, or speech-plus-gesture did not reveal differences in memory accuracy. We discuss the implications of these findings for the relation between spatial language, memory, and the modality of encoding.
  • Mamus, E., Speed, L. J., Ozyurek, A., & Majid, A. (2021). Sensory modality of input influences encoding of motion events in speech but not co-speech gestures. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 376-382). Vienna: Cognitive Science Society.

    Abstract

    Visual and auditory channels have different affordances, and this is mirrored in what information is available for linguistic encoding. The visual channel has high spatial acuity, whereas the auditory channel has better temporal acuity. These differences may lead to different conceptualizations of events and affect multimodal language production. Previous studies of motion events typically present visual input to elicit speech and gesture. The present study compared events presented as audio-only, visual-only, or multimodal (visual+audio) input and assessed speech and co-speech gesture for path and manner of motion in Turkish. Speakers with audio-only input mentioned path more and manner less in verbal descriptions, compared to speakers who had visual input. There was no difference in the type or frequency of gestures across conditions, and gestures were dominated by path-only gestures. This suggests that input modality influences speakers’ encoding of path and manner of motion events in speech, but not in co-speech gestures.
  • Manhardt, F. (2021). A tale of two modalities. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Pouw, W., Wit, J., Bögels, S., Rasenberg, M., Milivojevic, B., & Ozyurek, A. (2021). Semantically related gestures move alike: Towards a distributional semantics of gesture kinematics. In V. G. Duffy (Ed.), Digital human modeling and applications in health, safety, ergonomics and risk management. Human body, motion and behavior: 12th International Conference, DHM 2021, Held as Part of the 23rd HCI International Conference, HCII 2021 (pp. 269-287). Berlin: Springer. doi:10.1007/978-3-030-77817-0_20.
  • Schubotz, L. (2021). Effects of aging and cognitive abilities on multimodal language production and comprehension in context. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Azar, Z. (2020). Effect of language contact on speech and gesture: The case of Turkish-Dutch bilinguals in the Netherlands. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Ozyurek, A. (2020). From hands to brains: How does human body talk, think and interact in face-to-face language use? In K. Truong, D. Heylen, & M. Czerwinski (Eds.), ICMI '20: Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 1-2). New York, NY, USA: Association for Computing Machinery. doi:10.1145/3382507.3419442.
  • Rasenberg, M., Dingemanse, M., & Ozyurek, A. (2020). Lexical and gestural alignment in interaction and the emergence of novel shared symbols. In A. Ravignani, C. Barbieri, M. Flaherty, Y. Jadoul, E. Lattenkamp, H. Little, M. Martins, K. Mudd, & T. Verhoef (Eds.), The Evolution of Language: Proceedings of the 13th International Conference (Evolang13) (pp. 356-358). Nijmegen: The Evolution of Language Conferences.
  • Van Arkel, J., Woensdregt, M., Dingemanse, M., & Blokpoel, M. (2020). A simple repair mechanism can alleviate computational demands of pragmatic reasoning: simulations and complexity analysis. In R. Fernández, & T. Linzen (Eds.), Proceedings of the 24th Conference on Computational Natural Language Learning (CoNLL 2020) (pp. 177-194). Stroudsburg, PA, USA: The Association for Computational Linguistics. doi:10.18653/v1/2020.conll-1.14.

    Abstract

    How can people communicate successfully while keeping resource costs low in the face of ambiguity? We present a principled theoretical analysis comparing two strategies for disambiguation in communication: (i) pragmatic reasoning, where communicators reason about each other, and (ii) other-initiated repair, where communicators signal and resolve trouble interactively. Using agent-based simulations and computational complexity analyses, we compare the efficiency of these strategies in terms of communicative success, computation cost and interaction cost. We show that agents with a simple repair mechanism can increase efficiency, compared to pragmatic agents, by reducing their computational burden at the cost of longer interactions. We also find that efficiency is highly contingent on the mechanism, highlighting the importance of explicit formalisation and computational rigour.
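    As a purely illustrative sketch of the kind of agent-based comparison this abstract describes (not the authors' actual model), the following Python toy simulation contrasts a "pragmatic" listener that reasons over all candidate meanings with a "repair" listener that asks a clarification question when a signal is ambiguous. The lexicon, the cost counters, and the agent behaviour are all assumptions introduced here for illustration only.

    # Illustrative sketch only: a toy comparison of two disambiguation strategies
    # ("pragmatic reasoning" vs. "other-initiated repair"). The lexicon, the cost
    # bookkeeping, and the agent behaviour are hypothetical simplifications, not
    # the model used by Van Arkel et al. (2020).
    import random

    # A tiny ambiguous lexicon: each signal maps onto one or more meanings.
    LEXICON = {
        "s1": ["m1"],
        "s2": ["m2", "m3"],   # ambiguous signal
        "s3": ["m3", "m4"],   # ambiguous signal
    }

    def pragmatic_listener(signal, context):
        """Reason over every candidate meaning (higher computation, one turn)."""
        candidates = LEXICON[signal]
        computation = len(candidates) * len(context)   # crude proxy for reasoning cost
        meaning = next((m for m in candidates if m in context), candidates[0])
        return meaning, computation, 1                  # meaning, computation cost, turns

    def repair_listener(signal, context, speaker_meaning):
        """Interpret literally; if ambiguous, initiate repair (cheap, extra turn)."""
        candidates = LEXICON[signal]
        if len(candidates) == 1:
            return candidates[0], 1, 1
        # Repair: ask the speaker, who confirms the intended meaning.
        return speaker_meaning, 2, 2                    # low cost, but a longer interaction

    def simulate(strategy, trials=1000, seed=0):
        rng = random.Random(seed)
        success = computation = turns = 0
        for _ in range(trials):
            signal = rng.choice(list(LEXICON))
            meaning = rng.choice(LEXICON[signal])        # speaker's intended meaning
            context = [meaning, "m_distractor"]
            if strategy == "pragmatic":
                guess, comp, t = pragmatic_listener(signal, context)
            else:
                guess, comp, t = repair_listener(signal, context, meaning)
            success += guess == meaning
            computation += comp
            turns += t
        return success / trials, computation / trials, turns / trials

    for strategy in ("pragmatic", "repair"):
        acc, comp, turns = simulate(strategy)
        print(f"{strategy:9s}  success={acc:.2f}  computation={comp:.2f}  turns={turns:.2f}")

    Under these toy assumptions both strategies reach the same communicative success, while the repair agent trades lower per-trial computation for more interaction turns, which mirrors the qualitative pattern reported in the abstract.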
  • Ortega, G., Sumer, B., & Ozyurek, A. (2014). Type of iconicity matters: Bias for action-based signs in sign language acquisition. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1114-1119). Austin, TX: Cognitive Science Society.

    Abstract

    Early studies investigating sign language acquisition claimed that signs whose structures are motivated by the form of their referent (iconic) are not favoured in language development. However, recent work has shown that the first signs in deaf children’s lexicon are iconic. In this paper we go a step further and ask whether different types of iconicity modulate the learning of sign-referent links. Results from a picture description task indicate that children and adults used signs with two possible variants differentially. While children signing to adults favoured variants that map onto actions associated with a referent (action signs), adults signing to another adult produced variants that map onto objects’ perceptual features (perceptual signs). Parents interacting with children used more action variants than signers in adult-adult interactions. These results are in line with claims that language development is tightly linked to motor experience and that iconicity can be a communicative strategy in parental input.
  • Peeters, D., Azar, Z., & Ozyurek, A. (2014). The interplay between joint attention, physical proximity, and pointing gesture in demonstrative choice. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1144-1149). Austin, TX: Cognitive Science Society.
  • Sumer, B., Perniss, P., Zwitserlood, I., & Ozyurek, A. (2014). Learning to express "left-right" & "front-behind" in a sign versus spoken language. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1550-1555). Austin, TX: Cognitive Science Society.

    Abstract

    Developmental studies show that it takes longer for children learning spoken languages to acquire viewpoint-dependent spatial relations (e.g., left-right, front-behind), compared to ones that are not viewpoint-dependent (e.g., in, on, under). The current study investigates how children learn to express viewpoint-dependent relations in a sign language where depicted spatial relations can be communicated in an analogue manner in the space in front of the body or by using body-anchored signs (e.g., tapping the right and left hand/arm to mean left and right). Our results indicate that the visual-spatial modality might have a facilitating effect on learning to express these spatial relations (especially in the encoding of left-right) in a sign language (i.e., Turkish Sign Language) compared to a spoken language (i.e., Turkish).
