Languages can be expressed and perceived not only through speech or written text but also through visible body expressions (hands, body, and face). All spoken languages use gestures along with speech, and in deaf communities all aspects of language can be expressed through the visible body in sign language. However, the unique contribution of such visible expressions to our understanding of the human language faculty remains less well understood. The Multimodal Language Department aims to understand how visual features of language, whether accompanying speech or constituting sign languages, form a fundamental aspect of the human language capacity, contributing to its uniquely flexible and adaptive nature. The department's ambition is to establish the view of language and linguistics as multimodal phenomena.
To this end, we conduct fieldwork on how gestures are used in spoken languages with different linguistic structures, such as word order or prosody, as well as in different sign languages, to identify universal and diverse patterns. The Multimodal Language Department also aims to understand how neural, cognitive, and linguistic processing mechanisms, the demands of language use in interaction, and language transmission (for instance, learning constraints) shape the multimodal structures of language. The general aim, therefore, is to unravel the cognitive and social foundations of the human ability for language by treating its multimodal and crosslinguistic diversity as a fundamental design feature.
Our researchers combine multiple methods, such as corpus and computational linguistics, experimental methods, machine learning, AI, and virtual reality, to investigate multimodal language structure, use, processing, and transmission. We work with users of a variety of signed and spoken languages around the world, as well as with individuals who have different access to sensory experience, such as deaf and blind language users, people in different age groups, and people with autism spectrum disorder.