Multimodal detection and classification of head movements in face-to-face conversations: Exploring models, features and their interaction

Agirrezabal, M., Paggio, P., Navarretta, C., & Jongejan, B.

In W. Pouw, J. Trujillo, H. R. Bosker, L. Drijvers, M. Hoetjes, J. Holler, S. Kadava, L. Van Maastricht, E. Mamus, & A. Ozyurek (Eds.), Gesture and Speech in Interaction (GeSpIn) Conference.
In this work we perform multimodal detection and classification of head movements in face-to-face video conversation data. We experiment with different models and feature sets, providing insight into the effect of individual features as well as into how their interaction can enhance a head movement classifier. The features used include the position coordinates of the nose, neck and mid hip and their derivatives, together with acoustic features, namely the intensity and pitch of the speaker in focus. Results show that when input features are allowed to interact with each other, a linear classifier can reach a performance similar to that of a more complex non-linear neural model with several hidden layers. Our best models achieve state-of-the-art performance in the detection task, measured by macro-averaged
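The abstract's key observation is that interaction terms can let a linear classifier match a non-linear model. The following is a minimal illustrative sketch of that idea, not the authors' pipeline: pairwise-product features (hypothetical helper names `add_interactions`, `train_logreg`) make an XOR-style pattern, which no plain linear model can separate, linearly separable.

```python
import numpy as np


def add_interactions(X):
    """Augment a feature matrix with pairwise products (interaction terms)."""
    n, d = X.shape
    prods = [X[:, i] * X[:, j] for i in range(d) for j in range(i + 1, d)]
    return np.hstack([X, np.column_stack(prods)]) if prods else X


def train_logreg(X, y, lr=0.5, steps=2000):
    """Plain logistic regression trained by batch gradient descent."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))          # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)          # gradient step
    return w


def predict(w, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5).astype(int)


if __name__ == "__main__":
    # XOR-style toy data: the label depends on the *product* of the two
    # features, so a linear model on the raw features cannot separate it.
    X = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])
    y = np.array([0, 1, 1, 0])

    # With the interaction term x1*x2 added, a linear classifier succeeds.
    Xi = add_interactions(X)
    w = train_logreg(Xi, y)
    assert (predict(w, Xi) == y).all()
```

This mirrors the effect described in the abstract only in spirit: once the inputs "interact" (here via explicit products), the decision boundary a linear model needs becomes expressible, which a deeper network would otherwise have to learn internally.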