Ezgi Mamus defends thesis 13 May
We experience the world through different senses: we see, hear, smell, touch, and taste things. Each of these senses offers unique information but also has certain limitations. Together, they determine how we understand objects and events, and thus concepts. For example, when a car passes by, we see that it is a fast-moving sports car and also hear the whooshing noise it makes. The visual and auditory cues together inform us about the speed of the car. But what happens when one of these cues is absent, as in the experience of individuals who are blind from birth? Does this affect the way individuals describe such an event?
When we communicate about our experiences, we use different communicative means, such as words, hand gestures, and facial expressions. As with sensory experiences, each of these means has its own benefits and restrictions. For example, gesture can provide precise information about how an object moves, whereas speech may lack the right word for it. In the current thesis, Mamus investigates to what extent perceptual experience influences multimodal language use in speech and gesture, as well as the underlying conceptual knowledge that gives rise to these visible behaviours.