Generalisable patterns of gesture distinguish semantic categories in communication without language: Evidence from pantomime
There is a long-standing assumption that gestural forms are shaped by a set of modes of representation
(acting, representing, drawing, moulding), with each technique expressing speakers’ focus of attention on
specific aspects of a referent (Müller, 2013). However, only recently has the relationship between
gestural forms and modes of representation been linked to 1) the semantic categories they represent (i.e.,
objects, actions) and 2) the affordances of the referents. Here we investigate these relations when speakers
are asked to communicate about different types of referents in pantomime. This mode of communication has
revealed a generalisable ordering of event constituents across speakers of different languages (Goldin-
Meadow, So, Özyürek, & Mylander, 2008), but it remains an empirical question whether it also draws on
systematic patterns to distinguish different semantic categories.
Twenty speakers of Dutch participated in a pantomime generation task. They had to produce, without
speaking, a gesture that conveyed the same meaning as a word presented on a computer screen. Participants
saw 10 words from three semantic categories: actions with objects (e.g., to drink), manipulable objects (e.g.,
mug), and non-manipulable objects (e.g., building). Pantomimes were categorised according to their mode of
representation and the use of deictics (pointing, showing, or eye gaze). Further, the ordering of the different
representations was noted when more than one gesture was produced.
Actions with objects mainly elicited individual gestures (mean: 1.1, range: 1-2), while manipulable objects
(mean: 1.8, range: 1-4) and non-manipulable objects (mean: 1.6, range: 1-4) primarily elicited more than one
pantomime, produced as sequences of interrelated gestures. Actions with objects were mostly represented with
one gesture, through re-enactment of the action (e.g., raising a closed fist to the mouth for ‘to drink’), while
manipulable objects were mostly represented through an acting gesture followed by a deictic (e.g., raising a
closed fist to the mouth and then pointing at the fist). Non-manipulable objects, however, were represented
through a drawing gesture followed by an acting one (e.g., tracing a rectangle and then pretending to walk
through a door).
In the absence of language, the form of gestures is constrained by objects’ affordances (i.e., manipulable or
not) and by the communicative need to discriminate across semantic categories (i.e., objects or actions).
Gestures adopt an acting or a drawing mode of representation depending on the affordances of the referent,
which echoes patterns observed in the forms of co-speech gestures (Masson-Carro, Goudbeek, & Krahmer,
2015). We also show for the first time that the use and ordering of deictics and the different modes of
representation operate in tandem to distinguish between semantically related concepts (e.g., to drink and
mug). When forced to communicate without language, participants show consistent patterns in their
strategies to distinguish different semantic categories.
Publication type: Talk
Publication date: 2016