Capture the Face: Motion-Capture Patterns of Dynamic Facial Expressions in ASD

Poster Presentation
Thursday, May 10, 2018: 11:30 AM-1:30 PM
Hall Grote Zaal (de Doelen ICC Rotterdam)
E. Zane1, L. Pozzan2, Z. Yang3, T. Guha3, S. Narayanan3 and R. Grossman4, (1)FACE Lab, Emerson College, Boston, MA, (2)Amazon, Cambridge, MA, (3)University of Southern California, Los Angeles, CA, (4)CSD, Emerson College, Boston, MA

Background:

Neurotypical (NT) individuals struggle to interpret the emotional facial expressions of people with Autism Spectrum Disorder (ASD) (Brewer et al., 2016). Research attempting to determine why has reported mixed findings: some studies describe expressions in ASD as atypically flat (Kasari et al., 1993; Stagg et al., 2014; Yirmiya et al., 1989), while others report expressions that are less “natural” than those of NT individuals (Faso et al., 2015; Grossman et al., 2013).

Much of this research uses human observation to assess facial-expression quality in ASD. However, this type of subjective coding cannot elucidate underlying differences in facial movement that might lead to facial-expression ambiguity.

Objectives:
Use facial motion-capture (mocap) to objectively quantify the facial movements that make the expressions of individuals with ASD difficult for NT individuals to interpret.

Methods:
We presented 18 videos of actors making emotional facial expressions to 19 children and adolescents with ASD and 18 NT individuals (age M = 12;8 and 12;11, respectively). The groups did not differ significantly in age, gender, IQ, or language. Participants wore 32 reflective markers on their faces and were asked to mimic the expressions they saw; marker movement was recorded using mocap technology.

Mocap data were grouped by the Valence (positive vs. negative) and Intensity (high vs. low) of the expression being mimicked. We used Growth Curve Analysis (GCA) (Mirman, 2008; 2014) to test whether facial movement patterns were predicted by the Valence and Intensity of the expressed emotion and by participant group.
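GCA of this kind is typically fit as a mixed-effects regression with polynomial time terms interacting with condition and group predictors. A minimal sketch of that idea in Python using statsmodels, with simulated data (the movement feature, group sizes, and effect pattern below are invented for illustration and are not the study's actual data or model):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=1)

# Hypothetical illustration: simulate one facial-movement feature over
# time for participants in two groups mimicking two valences.
n_subj, n_time = 12, 20
rows = []
for s in range(n_subj):
    group = "NT" if s % 2 == 0 else "ASD"
    for valence in ("positive", "negative"):
        t = np.linspace(0.0, 1.0, n_time)
        # Assumed pattern: NT trajectories diverge by valence; ASD do not.
        slope = 0.0
        if group == "NT":
            slope = 1.0 if valence == "positive" else -1.0
        y = slope * t + rng.normal(0.0, 0.1, n_time)
        for ti, yi in zip(t, y):
            rows.append({"subj": s, "group": group,
                         "valence": valence, "time": ti, "y": yi})

df = pd.DataFrame(rows)

# Centered linear time term (full GCA would use orthogonal polynomials
# of higher order to capture trajectory shape).
df["ot1"] = df["time"] - df["time"].mean()

# Mixed-effects growth model: linear time x group x valence,
# with random intercepts for participants.
model = smf.mixedlm("y ~ ot1 * group * valence", df, groups=df["subj"])
fit = model.fit()

# Time-by-group-by-valence terms index how trajectory shape differs
# across conditions and diagnostic groups.
print(fit.params.filter(like="ot1"))
```

In this framing, a significant time x group x valence interaction corresponds to the kind of three-way effect reported in the Results: the groups' movement trajectories differ, and differ as a function of the expression's properties.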

Results:
Facial movement patterns were significantly predicted by an interaction between group (ASD vs. NT), the Valence of the expression being mimicked (Positive vs. Negative), and the Intensity of the expression (High vs. Low). This result reflects differences in facial expressions between the diagnostic groups that vary as a function of the Intensity and Valence of the expression. Post-hoc tests within each group show that Valence significantly predicts movement in the NT group, but not in the ASD group. Both groups’ movement patterns are predicted by Intensity.

Conclusions:
For NT individuals, Intensity and Valence significantly predict facial-expression movements. For individuals with ASD, facial movement is predicted by Intensity, but not by Valence. This demonstrates a lack of quantifiable differentiation between positive and negative expressions in the ASD group, which could provide an objective explanation for why NT individuals struggle to interpret the facial expressions of their ASD peers.