Sharice Clough

I am a postdoctoral researcher in the Multimodal Language Department at the Max Planck Institute for Psycholinguistics. I completed a master’s in speech-language pathology from the University of Iowa (2018) and a PhD in Hearing and Speech Sciences from Vanderbilt University (2023).

My research examines how adults with acquired brain injury and neurological disorders (e.g., traumatic brain injury, amnesia, Alzheimer's disease, mild cognitive impairment, and aphasia) comprehend and produce multimodal language. My research combines approaches from neuropsychology, psycholinguistics, and gesture studies to better understand the impact of brain injury on multimodal language use, to test the cognitive and neural resources that support co-speech gesture production and comprehension, and to identify opportunities to leverage multimodality to support cognition and communication after brain injury.

Broadly, my research focuses on two main questions:

How do cognitive and neural differences after brain injury impact multimodal language comprehension, integration, and learning? 

Using behavioral, eye-tracking, and virtual-reality paradigms, I study how individuals with focal or diffuse brain damage process and integrate information across speech and gesture, both in the moment and over time. My prior work shows that people with hippocampal amnesia who have profound memory impairments (Hilverman et al., 2018) and adults with traumatic brain injury (TBI) (Clough et al., 2023) can successfully integrate information from speech and gesture in their retellings of stories. However, individuals with TBI were less able than non-injured peers to use information from gesture to predict upcoming information in speech (Clough et al., 2024) and showed weaker gains in learning through gesture over time (Cho et al., 2024). These findings suggest that individuals with memory and other cognitive impairments may benefit from gesture to improve comprehension and learning, but the rate and efficiency of that learning may differ from non-injured peers. I am continuing this line of work in two ongoing research projects examining:

  • Whether iconic gestures speed up verbal response times to questions produced by a virtual agent in a 3D virtual reality environment and whether these effects differ by age or question type
  • Whether observing gesture during encoding can improve narrative recall in older adults with mild cognitive impairment or Alzheimer’s disease

How does memory impairment impact flexible, adaptive communication across modalities? 

This research line examines how speakers adjust multimodal communication in response to cognitive demands, social context, and listener needs. I investigate how gesture use changes after brain injury, particularly in populations with memory disorders. For example, individuals with hippocampal amnesia adapt their gestures, but not their speech, when communicating with adult versus child listeners (Clough et al., 2022). Similarly, adults with TBI show modality-specific adaptations in group conversations (Clough et al., in prep): they under-inform in speech but produce informative gestures even for knowledgeable listeners. Motion-tracking analyses reveal that both injured and non-injured speakers adjust gesture kinematics (e.g., size, height, holds, submovements) when addressing mixed-knowledge audiences, highlighting both the communicative and cognitive functions of gesture in speakers with TBI. In ongoing work, I am conducting kinematic analyses of gesture production in populations with adult neurogenic communication disorders, examining:

  • Differences in how healthy older adults, individuals with mild cognitive impairment, and individuals with Alzheimer’s disease use physical space when describing familiar spatial layouts
  • Whether the form of gestures produced by individuals with mild cognitive impairment or Alzheimer’s disease reflects gestures they have previously observed
  • How people with aphasia and their communication partners align in gesture form to support mutual understanding and communicative efficiency in interaction (with Martina Mellana)

Collectively, these research lines seek to understand the conditions under which individuals with brain injury are successful (or unsuccessful) at processing multimodal language and using gesture to support communication and learning. The goal of this research is to increase the sensitivity and ecological validity of language assessment and to improve functional communication outcomes.

For more information about my research and scholarly activities, please see my CV.
