Language is not restricted to speech: it can also be expressed and perceived through the visual modality, as in sign languages, the natural languages of deaf communities. The visual modality gives rise to modality-specific structures. For example, sign languages, unlike spoken languages, can express multiple semantic elements at once through iconicity, i.e., motivated form-meaning mappings, and simultaneity, i.e., the use of multiple body articulators (hands, torso, head, facial expression and eye gaze) at the same time. However, the role such properties play in communicative efficiency, a fundamental pressure known to shape language structure, has not been systematically explored in sign languages.
This thesis investigates whether and how iconicity and simultaneity are recruited for communicative efficiency in LIS (Italian Sign Language) and whether a linguistic system evolves to be optimized for this function. This inquiry is addressed through three experimental studies, each taking a different but complementary perspective on the role of iconicity and simultaneity in information organization, linguistic encoding strategies and language evolution. By bringing together research on sign languages, communicative efficiency and language evolution, this thesis shows how linguistic structure in LIS adapts to pressures for communicative efficiency and highlights the role that linguistic modality plays in how such efficiency is achieved.