Multimodal Language and Cognition


Scientific Mission

Our research group’s mission is to understand the complex architecture of the human language faculty by looking beyond the spoken and written modalities through which language can be expressed, that is, by also taking into account the visible bodily aspects of language and communication and how they are used in embodied and situated contexts.

We investigate the ways language is expressed through multiple modalities, both visual (e.g., the gestures used by hearing communities and the sign languages created by Deaf communities) and auditory (speech), as a novel window onto our language faculty and its neural, cognitive, and social foundations.

We particularly focus on how a multimodal approach to language enhances our understanding of:

  • Processing: The cognitive and neural substrates of language and its processing mechanisms (e.g., prediction, lexical access, integration, compositionality, and links to action, perception, and memory).
  • Interaction: How language is used to manage social and interactive coordination, such as in dialogue.
  • Evolution and Emergence: How language evolved, adapts, and can emerge anew when language input is not accessible.
  • Learning mechanisms: How language (spoken or signed) is learned by children (as early or late L1) and by adults (as L2).


Please visit our projects webpage for the particular projects under these themes, and the Nijmegen Gesture Center for more updates.

Approach

We adopt a cross-linguistic and cross-cultural approach, conducting our studies in diverse spoken and signed languages to understand which aspects of multimodal language and cognition are universally shared (resilient) and which are prone to diversity and shaped by contextual input (adaptive).

We focus primarily on the semantic and cognitive domains of space, events, and actions, where bodily expressions may be best suited to revealing the interface between language, cognition, and interaction.

We use multidisciplinary methods ranging from analyses of naturalistic multimodal interactions within and across cultures to controlled psycholinguistic and brain imaging experiments. 


Social Impact

We are committed to disseminating our research results to the public (educators, speech and language pathologists, audiologists, policy makers) in order to improve the language, communication, and cognition of populations who do not have “full” access to speech, language, or communicative abilities (such as deaf children without early access to language, hearing-impaired and aging populations, non-native language users, and autistic individuals).

Advancing Human-Technology Relations

We aspire to:

  • Develop new technologies that advance the study of multimodal communication, using automated gesture and speech recognition techniques (e.g., Kinect, AI, and deep learning).
  • Enhance humans’ adaptation to interacting with new technologies (virtual and augmented realities, avatars, and robots) through multimodal modes of communication.

Contact

Asli Ozyurek

Research Associate
Multimodal Language and Cognition
+31 24 3521304
