Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2021). Effects and non-effects of late language exposure on spatial language development: Evidence from deaf adults and children. Language Learning and Development, 17(1), 1-25. doi:10.1080/15475441.2020.1823846.
Abstract: Late exposure to the first language, as in the case of deaf children with hearing parents, hinders the production of linguistic expressions, even in adulthood. Less is known about the development of language soon after language exposure and whether late exposure hinders all domains of language in children and adults. We compared late signing adults and children (MAge = 8;5), 2 years after exposure to sign language, to their age-matched native signing peers in expressions of two types of locative relations that are acquired in a certain cognitive-developmental order: view-independent (IN-ON-UNDER) and view-dependent (LEFT-RIGHT). Late signing children and adults differed from native signers in their use of linguistic devices for view-dependent relations but not for view-independent relations. These effects were also modulated by morphological complexity. Hindering effects of late language exposure on the development of language in children and adults are thus not absolute but are modulated by cognitive and linguistic complexity.
Manhardt, F., Brouwer, S., & Ozyurek, A. (2021). A tale of two modalities: Sign and speech influence each other in bimodal bilinguals. Psychological Science, 32(3), 424-436. doi:10.1177/0956797620968789.
Abstract: Bimodal bilinguals are hearing individuals fluent in a sign and a spoken language. Can the two languages influence each other in such individuals despite differences in the visual (sign) and vocal (speech) modalities of expression? We investigated cross-linguistic influences on bimodal bilinguals’ expression of spatial relations. Unlike spoken languages, sign uses iconic linguistic forms that resemble physical features of objects in a spatial relation and thus expresses specific semantic information. Hearing bimodal bilinguals (n = 21) fluent in Dutch and Sign Language of the Netherlands and their hearing nonsigning and deaf signing peers (n = 20 each) described left/right relations between two objects. Bimodal bilinguals expressed more specific information about physical features of objects in speech than nonsigners, showing influence from sign language. They also used fewer iconic signs with specific semantic information than deaf signers, demonstrating influence from speech. Bimodal bilinguals’ speech and signs are shaped by two languages from different modalities.
Additional information: Supplementary materials
Nielsen, A. K. S., & Dingemanse, M. (2021). Iconicity in word learning and beyond: A critical review. Language and Speech, 64(1), 52-72. doi:10.1177/0023830920914339.
Abstract: Interest in iconicity (the resemblance-based mapping between aspects of form and meaning) is in the midst of a resurgence, and a prominent focus in the field has been the possible role of iconicity in language learning. Here we critically review theory and empirical findings in this domain. We distinguish local learning enhancement (where the iconicity of certain lexical items influences the learning of those items) and general learning enhancement (where the iconicity of certain lexical items influences the later learning of non-iconic items or systems). We find that evidence for local learning enhancement is quite strong, though not as clear-cut as often described, and it rests on a limited sample of languages. Despite common claims about broader facilitatory effects of iconicity on learning, we find that current evidence for general learning enhancement is lacking. We suggest a number of productive avenues for future research and specify what types of evidence would be required to show a role for iconicity in general learning enhancement. We also review evidence for functions of iconicity beyond word learning: iconicity enhances comprehension by providing complementary representations, supports communication about sensory imagery, and expresses affective meanings. Even if learning benefits may be modest or cross-linguistically varied, on balance, iconicity emerges as a vital aspect of language.
Azar, Z., Backus, A., & Ozyurek, A. (2019). General and language specific factors influence reference tracking in speech and gesture in discourse. Discourse Processes, 56(7), 553-574. doi:10.1080/0163853X.2018.1519368.
Abstract: Referent accessibility influences expressions in speech and gesture in similar ways. Speakers mostly use richer forms such as noun phrases (NPs) in speech and gesture more when referents have low accessibility, whereas they use reduced forms such as pronouns more often and gesture less when referents have high accessibility. We investigated the relationships between speech and gesture during reference tracking in a pro-drop language—Turkish. Overt pronouns were not strongly associated with accessibility but with pragmatic context (i.e., marking similarity, contrast). Nevertheless, speakers gestured more when referents were re-introduced versus maintained and when referents were expressed with NPs versus pronouns. Pragmatic context did not influence gestures. Further, pronouns in low-accessibility contexts were accompanied by gestures—possibly for reference disambiguation—more often than previously found for non-pro-drop languages in such contexts. These findings enhance our understanding of the relationships between speech and gesture at the discourse level.
Cuskley, C., Dingemanse, M., Kirby, S., & Van Leeuwen, T. M. (2019). Cross-modal associations and synesthesia: Categorical perception and structure in vowel–color mappings in a large online sample. Behavior Research Methods, 51, 1651-1675. doi:10.3758/s13428-019-01203-7.
Abstract: We report associations between vowel sounds, graphemes, and colours collected online from over 1000 Dutch speakers. We provide open materials, including a Python implementation of the structure measure and code for a single-page web application to run simple cross-modal tasks. We also provide a full dataset of colour-vowel associations from 1164 participants, including over 200 synaesthetes identified using consistency measures. Our analysis reveals salient patterns in cross-modal associations and introduces a novel measure of isomorphism in cross-modal mappings. We find that while acoustic features of vowels significantly predict certain mappings (replicating prior work), both vowel phoneme category and grapheme category are even better predictors of colour choice. Phoneme category is the best predictor of colour choice overall, pointing to the importance of phonological representations in addition to acoustic cues. Generally, high/front vowels are lighter, more green, and more yellow than low/back vowels. Synaesthetes respond more strongly on some dimensions, choosing lighter and more yellow colours for high and mid front vowels than non-synaesthetes. We also present a novel measure of cross-modal mappings adapted from ecology, which uses a simulated distribution of mappings to measure the extent to which participants' actual mappings are structured isomorphically across modalities. Synaesthetes have mappings that tend to be more structured than those of non-synaesthetes, and more consistent colour choices across trials correlate with higher structure scores. Nevertheless, the large majority (~70%) of participants produce structured mappings, indicating that the capacity to make isomorphically structured mappings across distinct modalities is shared to a large extent, even if the exact nature of mappings varies across individuals. Overall, this novel structure measure suggests a distribution of structured cross-modal association in the population, with synaesthetes at one extreme and participants with unstructured associations at the other.
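The structure measure described above — a permutation-based test of whether mappings are isomorphic across modalities, "adapted from ecology" — can be illustrated with a Mantel-style sketch. This is not the authors' published implementation; the function name, z-score summary, and permutation count are illustrative assumptions only.

```python
import numpy as np

def structure_score(stim_dist, resp_dist, n_perm=1000, seed=0):
    """Mantel-style structure score (illustrative, not the published code).

    Correlates pairwise distances between stimuli (e.g. vowels) and
    responses (e.g. colours), then z-scores that correlation against a
    simulated null distribution obtained by shuffling the mapping."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(np.asarray(stim_dist), k=1)
    observed = np.corrcoef(stim_dist[iu], resp_dist[iu])[0, 1]
    null = np.empty(n_perm)
    for i in range(n_perm):
        # shuffle rows and columns together to simulate a random mapping
        p = rng.permutation(stim_dist.shape[0])
        shuffled = resp_dist[np.ix_(p, p)]
        null[i] = np.corrcoef(stim_dist[iu], shuffled[iu])[0, 1]
    # high positive score = mapping is more isomorphic than chance
    return (observed - null.mean()) / null.std()
```

Under this sketch, a participant whose colour distances mirror the vowel distances would score well above the shuffled null, while an arbitrary mapping would score near zero.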
Drijvers, L., Vaitonyte, J., & Ozyurek, A. (2019). Degree of language experience modulates visual attention to visible speech and iconic gestures during clear and degraded speech comprehension. Cognitive Science, 43: e12789. doi:10.1111/cogs.12789.
Abstract: Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye-tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued-recall task. Gestural enhancement was the largest (i.e., a relative reduction in reaction time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during comprehension, but non-native listeners gazed more often at gestures than native listeners. However, only native but not non-native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non-native listeners might gaze at gesture more as it might be more challenging for non-native listeners to resolve the degraded auditory cues and couple those cues to phonological information that is conveyed by visible speech. This diminished phonological knowledge might hinder the use of semantic information that is conveyed by gestures for non-native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners.
Additional information: Supporting information
Drijvers, L., Van der Plas, M., Ozyurek, A., & Jensen, O. (2019). Native and non-native listeners show similar yet distinct oscillatory dynamics when using gestures to access speech in noise. NeuroImage, 194, 55-67. doi:10.1016/j.neuroimage.2019.03.032.
Abstract: Listeners are often challenged by adverse listening conditions during language comprehension induced by external factors, such as noise, but also internal factors, such as being a non-native listener. Visible cues, such as semantic information conveyed by iconic gestures, can enhance language comprehension in such situations. Using magnetoencephalography (MEG), we investigated whether spatiotemporal oscillatory dynamics can predict a listener's benefit of iconic gestures during language comprehension in both internally (non-native versus native listeners) and externally (clear/degraded speech) induced adverse listening conditions. Proficient non-native speakers of Dutch were presented with videos in which an actress uttered a degraded or clear verb, accompanied by a gesture or not, and completed a cued-recall task after every video. The behavioral and oscillatory results obtained from non-native listeners were compared to an MEG study where we presented the same stimuli to native listeners (Drijvers et al., 2018a). Non-native listeners demonstrated a similar gestural enhancement effect as native listeners, but overall were significantly slower on the cued-recall task. In both native and non-native listeners, an alpha/beta power suppression revealed engagement of the extended language network, motor and visual regions during gestural enhancement of degraded speech comprehension, suggesting similar core processes that support unification and lexical access processes. An individual's alpha/beta power modulation predicted the gestural benefit a listener experienced during degraded speech comprehension. Importantly, however, non-native listeners showed less engagement of the mouth area of the primary somatosensory cortex, left insula (beta), LIFG and ATL (alpha) than native listeners, which suggests that non-native listeners might be hindered in processing the degraded phonological cues and coupling them to the semantic information conveyed by the gesture. Native and non-native listeners thus demonstrated similar yet distinct spatiotemporal oscillatory dynamics when recruiting visual cues to disambiguate degraded speech.
Ortega, G., Schiefner, A., & Ozyurek, A. (2019). Hearing non-signers use their gestures to predict iconic form-meaning mappings at first exposure to sign. Cognition, 191: 103996. doi:10.1016/j.cognition.2019.06.008.
Abstract: The sign languages of deaf communities and the gestures produced by hearing people are communicative systems that exploit the manual-visual modality as means of expression. Despite their striking differences they share the property of iconicity, understood as the direct relationship between a symbol and its referent. Here we investigate whether non-signing hearing adults exploit their implicit knowledge of gestures to bootstrap accurate understanding of the meaning of iconic signs they have never seen before. In Study 1 we show that for some concepts gestures exhibit systematic forms across participants, and share different degrees of form overlap with the signs for the same concepts (full, partial, and no overlap). In Study 2 we found that signs with stronger resemblance to gestures are guessed more accurately and are assigned higher iconicity ratings by non-signers than signs with low overlap. In addition, when more people produced a systematic gesture resembling a sign, they assigned higher iconicity ratings to that sign. Furthermore, participants had a bias to assume that signs represent actions and not objects. The similarities between some signs and gestures could be explained by deaf signers and hearing gesturers sharing a conceptual substrate that is rooted in our embodied experiences with the world. The finding that gestural knowledge can ease the interpretation of the meaning of novel signs and predicts iconicity ratings is in line with embodied accounts of cognition and the role of prior knowledge in acquiring new schemas. Through these mechanisms we propose that iconic gestures that overlap in form with signs may serve as some type of ‘manual cognates’ that help non-signing adults to break into a new language at first exposure.
Additional information: Supplementary Materials
Rissman, L., & Majid, A. (2019). Thematic roles: Core knowledge or linguistic construct? Psychonomic Bulletin & Review, 26(6), 1850-1869. doi:10.3758/s13423-019-01634-5.
Abstract: The status of thematic roles such as Agent and Patient in cognitive science is highly controversial: To some they are universal components of core knowledge, to others they are scholarly fictions without psychological reality. We address this debate by posing two critical questions: to what extent do humans represent events in terms of abstract role categories, and to what extent are these categories shaped by universal cognitive biases? We review a range of literature that contributes answers to these questions: psycholinguistic and event cognition experiments with adults, children, and infants; typological studies grounded in cross-linguistic data; and studies of emerging sign languages. We pose these questions for a variety of roles and find that the answers depend on the role. For Agents and Patients, there is strong evidence for abstract role categories and a universal bias to distinguish the two roles. For Goals and Recipients, we find clear evidence for abstraction but mixed evidence as to whether there is a bias to encode Goals and Recipients as part of one or two distinct categories. Finally, we discuss the Instrumental role and do not find clear evidence for either abstraction or universal biases to structure instrumental categories.
Schubotz, L., Ozyurek, A., & Holler, J. (2019). Age-related differences in multimodal recipient design: Younger, but not older adults, adapt speech and co-speech gestures to common ground. Language, Cognition and Neuroscience, 34(2), 254-271. doi:10.1080/23273798.2018.1527377.
Abstract: Speakers can adapt their speech and co-speech gestures based on knowledge shared with an addressee (common ground-based recipient design). Here, we investigate whether these adaptations are modulated by the speaker’s age and cognitive abilities. Younger and older participants narrated six short comic stories to a same-aged addressee. Half of each story was known to both participants, the other half only to the speaker. The two age groups did not differ in terms of the number of words and narrative events mentioned per narration, or in terms of gesture frequency, gesture rate, or percentage of events expressed multimodally. However, only the younger participants reduced the amount of verbal and gestural information when narrating mutually known as opposed to novel story content. Age-related differences in cognitive abilities did not predict these differences in common ground-based recipient design. The older participants’ communicative behaviour may therefore also reflect differences in social or pragmatic goals.
Trujillo, J. P., Vaitonyte, J., Simanova, I., & Ozyurek, A. (2019). Toward the markerless and automatic analysis of kinematic features: A toolkit for gesture and movement research. Behavior Research Methods, 51(2), 769-777. doi:10.3758/s13428-018-1086-8.
Abstract: Action, gesture, and sign represent unique aspects of human communication that use form and movement to convey meaning. Researchers typically use manual coding of video data to characterize naturalistic, meaningful movements at various levels of description, but the availability of markerless motion-tracking technology allows for quantification of the kinematic features of gestures or any meaningful human movement. We present a novel protocol for extracting a set of kinematic features from movements recorded with Microsoft Kinect. Our protocol captures spatial and temporal features, such as height, velocity, submovements/strokes, and holds. This approach is based on studies of communicative actions and gestures and attempts to capture features that are consistently implicated as important kinematic aspects of communication. We provide open-source code for the protocol, a description of how the features are calculated, a validation of these features as quantified by our protocol versus manual coders, and a discussion of how the protocol can be applied. The protocol effectively quantifies kinematic features that are important in the production (e.g., characterizing different contexts) as well as the comprehension (e.g., used by addressees to understand intent and semantics) of manual acts. The protocol can also be integrated with qualitative analysis, allowing fast and objective demarcation of movement units, providing accurate coding even of complex movements. This can be useful to clinicians, as well as to researchers studying multimodal communication or human–robot interactions. By making this protocol available, we hope to provide a tool that can be applied to understanding meaningful movement characteristics in human communication.
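The kind of feature extraction the protocol describes — deriving height, velocity, and submovement counts from tracked joint positions — can be sketched as follows. This is not the published toolkit; the function name, threshold, and vertical-axis convention are assumptions for illustration only.

```python
import numpy as np

def kinematic_features(positions, fps=30.0, min_peak=0.3):
    """Illustrative sketch (not the published toolkit): simple kinematic
    features from markerless tracking data, assuming `positions` is an
    (n_frames, 3) array of hand coordinates in metres with y vertical.
    Submovements are counted as local maxima in the speed profile
    exceeding `min_peak` m/s, loosely following velocity-peak
    segmentation of strokes."""
    positions = np.asarray(positions, dtype=float)
    # frame-to-frame speed of the tracked joint, in m/s
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) * fps
    # local maxima above threshold count as submovement (stroke) peaks
    is_peak = ((speed[1:-1] > speed[:-2])
               & (speed[1:-1] >= speed[2:])
               & (speed[1:-1] > min_peak))
    return {
        "max_height": positions[:, 1].max(),
        "peak_speed": speed.max(),
        "n_submovements": int(is_peak.sum()),
    }
```

A trajectory with two bursts of fast motion separated by near-stillness would yield two submovements under this sketch, matching the intuition that strokes are bounded by velocity minima.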
Van Leeuwen, T. M., Van Petersen, E., Burghoorn, F., Dingemanse, M., & Van Lier, R. (2019). Autistic traits in synaesthesia: Atypical sensory sensitivity and enhanced perception of details. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 374: 20190024. doi:10.1098/rstb.2019.0024.
Abstract: In synaesthetes, specific sensory stimuli (e.g., black letters) elicit additional experiences (e.g., colour). Synaesthesia is highly prevalent among individuals with autism spectrum disorder, but the mechanisms of this co-occurrence are not clear. We hypothesized that autism and synaesthesia share atypical sensory sensitivity and perception. We assessed autistic traits, sensory sensitivity, and visual perception in two synaesthete populations. In Study 1, synaesthetes (N=79, of different types) scored higher than non-synaesthetes (N=76) on the Attention-to-detail and Social skills subscales of the Autism Spectrum Quotient indexing autistic traits, and on the Glasgow Sensory Questionnaire indexing sensory hypersensitivity and hyposensitivity, which frequently occur in autism. Synaesthetes performed two local/global visual tasks because individuals with autism typically show a bias toward detail processing. In synaesthetes, elevated motion coherence thresholds suggested reduced global motion perception, and higher accuracy on an embedded figures task suggested enhanced local perception. In Study 2, sequence-space synaesthetes (N=18) completed the same tasks. Questionnaire and embedded figures results qualitatively resembled Study 1 results, but no significant group differences with non-synaesthetes (N=20) were obtained. Unexpectedly, sequence-space synaesthetes had reduced motion coherence thresholds. Altogether, our studies suggest atypical sensory sensitivity and a bias towards detail processing are shared features of synaesthesia and autism spectrum disorder.
Furman, R., Kuntay, A., & Ozyurek, A. (2014). Early language-specificity of children's event encoding in speech and gesture: Evidence from caused motion in Turkish. Language, Cognition and Neuroscience, 29, 620-634. doi:10.1080/01690965.2013.824993.
Abstract: Previous research on language development shows that children are tuned early on to the language-specific semantic and syntactic encoding of events in their native language. Here we ask whether language-specificity is also evident in children's early representations in gesture accompanying speech. In a longitudinal study, we examined the spontaneous speech and co-speech gestures of eight Turkish-speaking children aged one to three and focused on their caused motion event expressions. In Turkish, unlike in English, the main semantic elements of caused motion such as Action and Path can be encoded in the verb (e.g. sok- ‘put in’) and the arguments of a verb can be easily omitted. We found that Turkish-speaking children's speech indeed displayed these language-specific features and focused on verbs to encode caused motion. More interestingly, we found that their early gestures also manifested language-specificity. Children used iconic co-speech gestures (from 19 months onwards) as often as pointing gestures and represented semantic elements such as Action with Figure and/or Path that reinforced or supplemented speech in language-specific ways until the age of three. In the light of previous reports on the scarcity of iconic gestures in English-speaking children's early productions, we argue that the language children learn shapes gestures and how they are integrated with speech in the first three years of life.
Holler, J., Schubotz, L., Kelly, S., Hagoort, P., Schuetze, M., & Ozyurek, A. (2014). Social eye gaze modulates processing of speech and co-speech gesture. Cognition, 133, 692-697. doi:10.1016/j.cognition.2014.08.008.
Abstract: In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech + gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker’s preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients’ speech processing suffers, gestures can enhance the comprehension of a speaker’s message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension.
Ozyurek, A. (2014). Hearing and seeing meaning in speech and gesture: Insights from brain and behaviour. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 369(1651): 20130296. doi:10.1098/rstb.2013.0296.
Abstract: As we speak, we use not only the arbitrary form–meaning mappings of the speech channel but also motivated form–meaning correspondences, i.e. iconic gestures that accompany speech (e.g. inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological recording component N400, which is sensitive to the ease of semantic integration of a word to previous context, and recruitment of the left-lateralized frontal–posterior temporal network (left inferior frontal gyrus (IFG), middle temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, we integrate the information coming from both channels recruiting brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are special to gestures or are shared with actions or other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures is discussed, as well as the implications for a multimodal view of language.
Demir, Ö. E., So, W.-C., Ozyurek, A., & Goldin-Meadow, S. (2012). Turkish- and English-speaking children display sensitivity to perceptual context in referring expressions they produce in speech and gesture. Language and Cognitive Processes, 27, 844-867. doi:10.1080/01690965.2011.589273.
Abstract: Speakers choose a particular expression based on many factors, including availability of the referent in the perceptual context. We examined whether, when expressing referents, monolingual English- and Turkish-speaking children: (1) are sensitive to perceptual context, (2) express this sensitivity in language-specific ways, and (3) use co-speech gestures to specify referents that are underspecified. We also explored the mechanisms underlying children's sensitivity to perceptual context. Children described short vignettes to an experimenter under two conditions: The characters in the vignettes were present in the perceptual context (perceptual context); the characters were absent (no perceptual context). Children routinely used nouns in the no perceptual context condition, but shifted to pronouns (English-speaking children) or omitted arguments (Turkish-speaking children) in the perceptual context condition. Turkish-speaking children used underspecified referents more frequently than English-speaking children in the perceptual context condition; however, they compensated for the difference by using gesture to specify the forms. Gesture thus gives children learning structurally different languages a way to achieve comparable levels of specification while at the same time adhering to the referential expressions dictated by their language.
Kelly, S., Healey, M., Ozyurek, A., & Holler, J. (2012). The communicative influence of gesture and action during speech comprehension: Gestures have the upper hand [Abstract]. Abstracts of the Acoustics 2012 Hong Kong conference published in The Journal of the Acoustical Society of America, 131, 3311. doi:10.1121/1.4708385.
Abstract: Hand gestures combine with speech to form a single integrated system of meaning during language comprehension (Kelly et al., 2010). However, it is unknown whether gesture is uniquely integrated with speech or is processed like any other manual action. Thirty-one participants watched videos presenting speech with gestures or manual actions on objects. The relationship between the speech and gesture/action was either complementary (e.g., “He found the answer,” while producing a calculating gesture vs. actually using a calculator) or incongruent (e.g., the same sentence paired with the incongruent gesture/action of stirring with a spoon). Participants watched the video (prime) and then responded to a written word (target) that was or was not spoken in the video prime (e.g., “found” or “cut”). ERPs were taken to the primes (time-locked to the spoken verb, e.g., “found”) and the written targets. For primes, there was a larger frontal N400 (semantic processing) to incongruent vs. congruent items for the gesture, but not action, condition. For targets, the P2 (phonemic processing) was smaller for target words following congruent vs. incongruent gesture, but not action, primes. These findings suggest that hand gestures are integrated with speech in a privileged fashion compared to manual actions on objects.
Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2012). An empirical investigation of expression of multiple entities in Turkish Sign Language (TİD): Considering the effects of modality. Lingua, 122, 1636-1667. doi:10.1016/j.lingua.2012.08.010.
Abstract: This paper explores the expression of multiple entities in Turkish Sign Language (Türk İşaret Dili; TİD), a less well-studied sign language. It aims to provide a comprehensive description of the ways and frequencies in which entity plurality in this language is expressed, both within and outside the noun phrase. We used a corpus that includes both elicited and spontaneous data from native signers. The results reveal that most of the expressions of multiple entities in TİD are iconic, spatial strategies (i.e. localization and spatial plural predicate inflection), none of which, we argue, should be considered genuine plural-marking devices with the main aim of expressing plurality. Instead, the observed devices for localization and predicate inflection allow for a plural interpretation when multiple locations in space are used. Our data do not provide evidence that TİD employs (productive) morphological plural marking (i.e. reduplication) on nouns, in contrast to some other sign languages and many spoken languages. We relate our findings to the expression of multiple entities in other signed languages and in spoken languages and discuss these findings in terms of modality effects on the expression of multiple entities in human language.