Our research group’s mission is to understand the complex architecture of the human language faculty by looking beyond the spoken and written modalities through which language can be expressed, that is, by also taking into account the visible bodily aspects of language and communication and how they are used in embodied, situated contexts.
We investigate the ways language is expressed through multiple modalities, such as the visual modality (e.g., gestures used by hearing communities and sign languages created by Deaf communities) and the auditory modality (speech), as a novel window onto our language faculty and its neural, cognitive, and social foundations.
We particularly focus on how a multimodal approach to language enhances our understanding of
Please visit our other webpage for specific projects under these themes.
We adopt a cross-linguistic and cross-cultural approach and conduct our studies in diverse spoken and signed languages to understand which aspects of multimodal language and cognition are universally shared (resilient) and which ones are prone to diversity and influence by contextual input (adaptive).
We primarily focus on the semantic and cognitive domains of space, events, and actions, where bodily expressions may be best suited to reveal the interface between language, cognition, and interaction.
We use multidisciplinary methods ranging from analyses of naturalistic multimodal interactions within and across cultures to controlled psycholinguistic and brain imaging experiments.
We are committed to disseminating our research results to the public (educators, speech and language pathologists, audiologists, policy makers) in order to improve the language, communication, and cognition of populations who do not have "full" access to speech, language, or communicative abilities (such as deaf children without early access to language, hearing-impaired and aging populations, non-native users of a language, and autistic individuals).
We aspire to:
Develop new technologies that will advance the study of multimodal communication using automated gesture and speech recognition techniques (Kinect, AI, deep learning).
Enhance humans’ adaptation to interacting with new technologies (virtual and augmented realities, avatars, and robots) using multimodal modes of communication.