  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2021). Effects and non-effects of late language exposure on spatial language development: Evidence from deaf adults and children. Language Learning and Development, 17(1), 1-25. doi:10.1080/15475441.2020.1823846.

    Abstract

    Late exposure to the first language, as in the case of deaf children with hearing parents, hinders the production of linguistic expressions, even in adulthood. Less is known about the development of language soon after language exposure and about whether late exposure hinders all domains of language in children and adults. We compared late signing adults and children (MAge = 8;5), two years after exposure to sign language, to their age-matched native signing peers in expressions of two types of locative relations that are acquired in a certain cognitive-developmental order: view-independent (IN-ON-UNDER) and view-dependent (LEFT-RIGHT). Late signing children and adults differed from native signers in their use of linguistic devices for view-dependent relations but not for view-independent relations. These effects were also modulated by morphological complexity. Hindering effects of late language exposure on the development of language in children and adults are not absolute but are modulated by cognitive and linguistic complexity.
  • Manhardt, F., Brouwer, S., & Ozyurek, A. (2021). A tale of two modalities: Sign and speech influence each other in bimodal bilinguals. Psychological Science, 32(3), 424-436. doi:10.1177/0956797620968789.

    Abstract

    Bimodal bilinguals are hearing individuals fluent in a sign and a spoken language. Can the two languages influence each other in such individuals despite differences in the visual (sign) and vocal (speech) modalities of expression? We investigated cross-linguistic influences on bimodal bilinguals’ expression of spatial relations. Unlike spoken languages, sign uses iconic linguistic forms that resemble physical features of objects in a spatial relation and thus expresses specific semantic information. Hearing bimodal bilinguals (n = 21) fluent in Dutch and Sign Language of the Netherlands and their hearing nonsigning and deaf signing peers (n = 20 each) described left/right relations between two objects. Bimodal bilinguals expressed more specific information about physical features of objects in speech than nonsigners, showing influence from sign language. They also used fewer iconic signs with specific semantic information than deaf signers, demonstrating influence from speech. Bimodal bilinguals’ speech and signs are shaped by two languages from different modalities.

  • Nielsen, A. K. S., & Dingemanse, M. (2021). Iconicity in word learning and beyond: A critical review. Language and Speech, 64(1), 52-72. doi:10.1177/0023830920914339.

    Abstract

    Interest in iconicity (the resemblance-based mapping between aspects of form and meaning) is in the midst of a resurgence, and a prominent focus in the field has been the possible role of iconicity in language learning. Here we critically review theory and empirical findings in this domain. We distinguish local learning enhancement (where the iconicity of certain lexical items influences the learning of those items) and general learning enhancement (where the iconicity of certain lexical items influences the later learning of non-iconic items or systems). We find that evidence for local learning enhancement is quite strong, though not as clear-cut as it is often described, and based on a limited sample of languages. Despite common claims about broader facilitatory effects of iconicity on learning, we find that current evidence for general learning enhancement is lacking. We suggest a number of productive avenues for future research and specify what types of evidence would be required to show a role for iconicity in general learning enhancement. We also review evidence for functions of iconicity beyond word learning: iconicity enhances comprehension by providing complementary representations, supports communication about sensory imagery, and expresses affective meanings. Even if learning benefits may be modest or cross-linguistically varied, on balance, iconicity emerges as a vital aspect of language.
  • Pouw, W., De Jonge-Hoekstra, L., Harrison, S. J., Paxton, A., & Dixon, J. A. (2021). Gesture-speech physics in fluent speech and rhythmic upper limb movements. Annals of the New York Academy of Sciences, 1491(1), 89-105. doi:10.1111/nyas.14532.

    Abstract

    Communicative hand gestures are often coordinated with prosodic aspects of speech, and salient moments of gestural movement (e.g., quick changes in speed) often co-occur with salient moments in speech (e.g., near peaks in fundamental frequency and intensity). A common understanding is that such gesture and speech coordination is culturally and cognitively acquired, rather than having a biological basis. Recently, however, the biomechanical physical coupling of arm movements to speech movements has been identified as a potentially important factor in understanding the emergence of gesture-speech coordination. Specifically, in the case of steady-state vocalization and mono-syllable utterances, forces produced during gesturing are transferred onto the tensioned body, leading to changes in respiratory-related activity and thereby affecting vocalization F0 and intensity. In the current experiment (N = 37), we extend this previous line of work to show that gesture-speech physics impacts fluent speech, too. Compared with non-movement, participants producing fluent self-formulated speech while rhythmically moving their limbs demonstrate heightened F0 and amplitude envelope, and such effects are more pronounced for higher-impulse arm versus lower-impulse wrist movement. We replicate that acoustic peaks arise especially during moments of peak impulse (i.e., the beat) of the movement, namely around deceleration phases of the movement. Finally, higher deceleration rates of higher-mass arm movements were related to higher peaks in acoustics. These results confirm a role for the physical impulses of gesture in affecting the speech system. We discuss the implications of gesture-speech physics for understanding the emergence of communicative gesture, both ontogenetically and phylogenetically.

  • Azar, Z., & Ozyurek, A. (2015). Discourse management: Reference tracking in speech and gesture in Turkish narratives. Dutch Journal of Applied Linguistics, 4(2), 222-240. doi:10.1075/dujal.4.2.06aza.

    Abstract

    Speakers achieve coherence in discourse by alternating between differential lexical forms, e.g. noun phrase, pronoun, and null form, in accordance with the accessibility of the entities they refer to, i.e. whether they introduce an entity into discourse for the first time or continue referring to an entity they already mentioned before. Moreover, tracking of entities in discourse is a multimodal phenomenon. Studies show that speakers are sensitive to the informational structure of discourse and use fuller forms (e.g. full noun phrases) in speech and gesture more when re-introducing an entity, while they use attenuated forms (e.g. pronouns) in speech and gesture less when maintaining a referent. However, those studies focus mainly on non-pro-drop languages (e.g. English, German, and French). The present study investigates whether the same pattern holds for pro-drop languages. It draws on data from elicited narratives of adult native speakers of Turkish. We find that Turkish speakers mostly use fuller forms to code subject referents in re-introduction contexts and the null form in maintenance contexts, and that they point to gesture space for referents more in re-introduction contexts compared to maintenance contexts. Hence we provide supportive evidence for the reverse correlation between the accessibility of a discourse referent and its coding in speech and gesture. As a novel contribution, we also find that the third person pronoun is used in re-introduction contexts only when the referent was previously mentioned as the object argument of the immediately preceding clause.
  • Goldin-Meadow, S., Namboodiripad, S., Mylander, C., Ozyurek, A., & Sancar, B. (2015). The resilience of structure built around the predicate: Homesign gesture systems in Turkish and American deaf children. Journal of Cognition and Development, 16, 55-80. doi:10.1080/15248372.2013.803970.

    Abstract

    Deaf children whose hearing losses prevent them from accessing spoken language and whose hearing parents have not exposed them to sign language develop gesture systems, called homesigns, which have many of the properties of natural language—the so-called resilient properties of language. We explored the resilience of structure built around the predicate—in particular, how manner and path are mapped onto the verb—in homesign systems developed by deaf children in Turkey and the United States. We also asked whether the Turkish homesigners exhibit sentence-level structures previously identified as resilient in American and Chinese homesigners. We found that the Turkish and American deaf children used not only the same production probability and ordering patterns to indicate who does what to whom, but also used the same segmentation and conflation patterns to package manner and path. The gestures that the hearing parents produced did not, for the most part, display the patterns found in the children's gestures. Although cospeech gesture may provide the building blocks for homesign, it does not provide the blueprint for these resilient properties of language.
  • Holler, J., Kokal, I., Toni, I., Hagoort, P., Kelly, S. D., & Ozyurek, A. (2015). Eye’m talking to you: Speakers’ gaze direction modulates co-speech gesture processing in the right MTG. Social Cognitive & Affective Neuroscience, 10, 255-261. doi:10.1093/scan/nsu047.

    Abstract

    Recipients process information from speech and co-speech gestures, but it is currently unknown how this processing is influenced by the presence of other important social cues, especially gaze direction, a marker of communicative intent. Such cues may modulate neural activity in regions associated either with the processing of ostensive cues, such as eye gaze, or with the processing of semantic information, provided by speech and gesture. Participants were scanned (fMRI) while taking part in triadic communication involving two recipients and a speaker. The speaker uttered sentences that were and were not accompanied by complementary iconic gestures. Crucially, the speaker alternated her gaze direction, thus creating two recipient roles: addressed (direct gaze) vs unaddressed (averted gaze) recipient. The comprehension of Speech&Gesture relative to SpeechOnly utterances recruited middle occipital, middle temporal and inferior frontal gyri, bilaterally. The calcarine sulcus and posterior cingulate cortex were sensitive to differences between direct and averted gaze. Most importantly, Speech&Gesture utterances, but not SpeechOnly utterances, produced additional activity in the right middle temporal gyrus when participants were addressed. Marking communicative intent with gaze direction modulates the processing of speech–gesture utterances in cerebral areas typically associated with the semantic processing of multi-modal communicative acts.
  • Kelly, S., Healey, M., Ozyurek, A., & Holler, J. (2015). The processing of speech, gesture and action during language comprehension. Psychonomic Bulletin & Review, 22, 517-523. doi:10.3758/s13423-014-0681-7.

    Abstract

    Hand gestures and speech form a single integrated system of meaning during language comprehension, but is gesture processed with speech in a unique fashion? We had subjects watch multimodal videos that presented auditory (words) and visual (gestures and actions on objects) information. Half of the subjects related the audio information to a written prime presented before the video, and the other half related the visual information to the written prime. For half of the multimodal video stimuli, the audio and visual information contents were congruent, and for the other half, they were incongruent. For all subjects, stimuli in which the gestures and actions were incongruent with the speech produced more errors and longer response times than did stimuli that were congruent, but this effect was less prominent for speech-action stimuli than for speech-gesture stimuli. However, subjects focusing on visual targets were more accurate when processing actions than gestures. These results suggest that although actions may be easier to process than gestures, gestures may be more tightly tied to the processing of accompanying speech.
  • Ozyurek, A., Furman, R., & Goldin-Meadow, S. (2015). On the way to language: Event segmentation in homesign and gesture. Journal of Child Language, 42, 64-94. doi:10.1017/S0305000913000512.

    Abstract

    Languages typically express semantic components of motion events such as manner (roll) and path (down) in separate lexical items. We explore how these combinatorial possibilities of language arise by focusing on (i) gestures produced by deaf children who lack access to input from a conventional language (homesign); (ii) gestures produced by hearing adults and children while speaking; and (iii) gestures used by hearing adults without speech when asked to do so in elicited descriptions of motion events with simultaneous manner and path. Homesigners tended to conflate manner and path in one gesture, but also used a mixed form, adding a manner and/or path gesture to the conflated form sequentially. Hearing speakers, with or without speech, used the conflated form, or gestured manner or path alone, but rarely used the mixed form. The mixed form may serve as an intermediate structure on the way to the discrete and sequenced forms found in natural languages.
  • Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2015). Electrophysiological and kinematic correlates of communicative intent in the planning and production of pointing gestures and speech. Journal of Cognitive Neuroscience, 27(12), 2352-2368. doi:10.1162/jocn_a_00865.

    Abstract

    In everyday human communication, we often express our communicative intentions by manually pointing out referents in the material world around us to an addressee, often in tight synchronization with referential speech. This study investigated whether and how the kinematic form of index finger pointing gestures is shaped by the gesturer's communicative intentions and how this is modulated by the presence of concurrently produced speech. Furthermore, we explored the neural mechanisms underpinning the planning of communicative pointing gestures and speech. Two experiments were carried out in which participants pointed at referents for an addressee while the informativeness of their gestures and speech was varied. Kinematic and electrophysiological data were recorded online. It was found that participants prolonged the duration of the stroke and poststroke hold phase of their gesture to be more communicative, in particular when the gesture was carrying the main informational burden in their multimodal utterance. Frontal and P300 effects in the ERPs suggested the importance of intentional and modality-independent attentional mechanisms during the planning phase of informative pointing gestures. These findings contribute to a better understanding of the complex interplay between action, attention, intention, and language in the production of pointing gestures, a communicative act core to human interaction.
  • Peeters, D., Hagoort, P., & Ozyurek, A. (2015). Electrophysiological evidence for the role of shared space in online comprehension of spatial demonstratives. Cognition, 136, 64-84. doi:10.1016/j.cognition.2014.10.010.

    Abstract

    A fundamental property of language is that it can be used to refer to entities in the extra-linguistic physical context of a conversation in order to establish a joint focus of attention on a referent. Typological and psycholinguistic work across a wide range of languages has put forward at least two different theoretical views on demonstrative reference. Here we contrasted and tested these two accounts by investigating the electrophysiological brain activity underlying the construction of indexical meaning in comprehension. In two EEG experiments, participants watched pictures of a speaker who referred to one of two objects using speech and an index-finger pointing gesture. In contrast with separately collected native speakers’ linguistic intuitions, N400 effects showed a preference for a proximal demonstrative when speaker and addressee were in a face-to-face orientation and all possible referents were located in the shared space between them, irrespective of the physical proximity of the referent to the speaker. These findings reject egocentric proximity-based accounts of demonstrative reference, support a sociocentric approach to deixis, suggest that interlocutors construe a shared space during conversation, and imply that the psychological proximity of a referent may be more important than its physical proximity.
  • Perniss, P. M., Zwitserlood, I., & Ozyurek, A. (2015). Does space structure spatial language? A comparison of spatial expression across sign languages. Language, 91(3), 611-641.

    Abstract

    The spatial affordances of the visual modality give rise to a high degree of similarity between sign languages in the spatial domain. This stands in contrast to the vast structural and semantic diversity in linguistic encoding of space found in spoken languages. However, the possibility and nature of linguistic diversity in spatial encoding in sign languages have not been rigorously investigated by systematic crosslinguistic comparison. Here, we compare locative expression in two unrelated sign languages, Turkish Sign Language (Türk İşaret Dili, TİD) and German Sign Language (Deutsche Gebärdensprache, DGS), focusing on the expression of figure-ground (e.g. cup on table) and figure-figure (e.g. cup next to cup) relationships in a discourse context. In addition to similarities, we report qualitative and quantitative differences between the sign languages in the formal devices used (i.e. unimanual vs. bimanual; simultaneous vs. sequential) and in the degree of iconicity of the spatial devices. Our results suggest that sign languages may display more diversity in the spatial domain than has been previously assumed, and in a way more comparable with the diversity found in spoken languages. The study contributes to a more comprehensive understanding of how space gets encoded in language.
  • Perniss, P. M., Ozyurek, A., & Morgan, G. (2015). The Influence of the visual modality on language structure and conventionalization: Insights from sign language and gesture. Topics in Cognitive Science, 7(1), 2-11. doi:10.1111/tops.12127.

    Abstract

    For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems.
  • Perniss, P. M., & Ozyurek, A. (2015). Visible cohesion: A comparison of reference tracking in sign, speech, and co-speech gesture. Topics in Cognitive Science, 7(1), 36-60. doi:10.1111/tops.12122.

    Abstract

    Establishing and maintaining reference is a crucial part of discourse. In spoken languages, differential linguistic devices mark referents occurring in different referential contexts, that is, introduction, maintenance, and re-introduction contexts. Speakers using gestures as well as users of sign languages have also been shown to mark referents differentially depending on the referential context. This article investigates the modality-specific contribution of the visual modality in marking referential context by providing a direct comparison between sign language (German Sign Language; DGS) and co-speech gesture with speech (German) in elicited narratives. Across all forms of expression, we find that referents in subject position are referred to with more marking material in re-introduction contexts compared to maintenance contexts. Furthermore, we find that spatial modification is used as a modality-specific strategy in both DGS and German co-speech gesture, and that the configuration of referent locations in sign space and gesture space corresponds in an iconic and consistent way to the locations of referents in the narrated event. However, we find that spatial modification is used in different ways for marking re-introduction and maintenance contexts in DGS and German co-speech gesture. The findings are discussed in relation to the unique contribution of the visual modality to reference tracking in discourse when it is used in a unimodal system with full linguistic structure (i.e., as in sign) versus in a bimodal system that is a composite of speech and gesture.
  • Furman, R., Kuntay, A., & Ozyurek, A. (2014). Early language-specificity of children's event encoding in speech and gesture: Evidence from caused motion in Turkish. Language, Cognition and Neuroscience, 29, 620-634. doi:10.1080/01690965.2013.824993.

    Abstract

    Previous research on language development shows that children are tuned early on to the language-specific semantic and syntactic encoding of events in their native language. Here we ask whether language-specificity is also evident in children's early representations in gesture accompanying speech. In a longitudinal study, we examined the spontaneous speech and cospeech gestures of eight Turkish-speaking children aged one to three and focused on their caused motion event expressions. In Turkish, unlike in English, the main semantic elements of caused motion such as Action and Path can be encoded in the verb (e.g. sok- ‘put in’) and the arguments of a verb can be easily omitted. We found that Turkish-speaking children's speech indeed displayed these language-specific features and focused on verbs to encode caused motion. More interestingly, we found that their early gestures also manifested specificity. Children used iconic cospeech gestures (from 19 months onwards) as often as pointing gestures and represented semantic elements such as Action with Figure and/or Path that reinforced or supplemented speech in language-specific ways until the age of three. In the light of previous reports on the scarcity of iconic gestures in English-speaking children's early productions, we argue that the language children learn shapes gestures and how they get integrated with speech in the first three years of life.
  • Holler, J., Schubotz, L., Kelly, S., Hagoort, P., Schuetze, M., & Ozyurek, A. (2014). Social eye gaze modulates processing of speech and co-speech gesture. Cognition, 133, 692-697. doi:10.1016/j.cognition.2014.08.008.

    Abstract

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech + gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker’s preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients’ speech processing suffers, gestures can enhance the comprehension of a speaker’s message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension.
  • Ozyurek, A. (2014). Hearing and seeing meaning in speech and gesture: Insights from brain and behaviour. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 369(1651): 20130296. doi:10.1098/rstb.2013.0296.

    Abstract

    As we speak, we use not only the arbitrary form–meaning mappings of the speech channel but also motivated form–meaning correspondences, i.e. iconic gestures that accompany speech (e.g. an inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about the processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological component N400, which is sensitive to the ease of semantic integration of a word with the previous context, and recruitment of the left-lateralized frontal–posterior temporal network (left inferior frontal gyrus (IFG), middle temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, integrating the information coming from both channels recruits brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are special to gestures or are shared with actions or other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures is discussed, as well as the implications for a multimodal view of language.
