Recognizing Posed and Evoked Facial Expressions from Adults with Autism Spectrum Disorder

Thursday, May 15, 2014
Atrium Ballroom (Marriott Marquis Atlanta)
D. J. Faso1, N. J. Sasson2 and A. Pinkham3, (1)University of Texas at Dallas, Richardson, TX, (2)School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX, (3)Southern Methodist University, Dallas, TX
Background: Successful social interaction requires both effective perception and effective expression of emotion. Although a large literature has demonstrated that social interaction deficits in Autism Spectrum Disorder (ASD) may result from impairments in emotion perception, comparatively little research has assessed whether emotional expressions produced by individuals with ASD are interpreted less effectively by others. The few prior studies investigating this question have assessed cued or posed expressions, which may fail to capture how facial expressivity occurs naturally during an emotional event.

Objectives: This study aimed to determine whether posed and evoked facial expressions of emotion produced by adults with ASD are perceived differently by naïve observers than those produced by typically developing (TD) controls. Observers were predicted to 1) be less accurate in recognizing expressions from ASD adults (Emotion Recognition Accuracy; ERA), 2) rate ASD expressions as less intense, consistent with 'flat affect', and 3) rate ASD expressions as less natural. Further, the factors informing correct identification of emotional expressions were predicted to differ between the ASD and TD groups.

Methods: Static facial photographs of high-functioning adults with ASD (N=6) and typically developing (TD) comparison adults (N=6) were captured expressing five emotions (happy, sad, anger, fear, neutral) at varying intensities in both a posed and an evoked condition. In the posed condition, participants produced expressions cued by the researcher. In the evoked condition, participants were coached to relive emotional past experiences while their naturally elicited facial expressions were captured. Participants rated their subjective experience of emotion during the procedure. These ratings were higher in the evoked condition than in the posed condition, validating that the procedure elicited genuinely 'felt' emotions, but importantly did not differ between the groups (F(1,8)=0.596, p=.462, η²=.069). In the second stage of the experiment, naïve observers (N=38) identified the expressed emotion in each photo and rated the intensity and naturalness of the expression.
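
For illustration only, the manipulation check described above (higher subjective-emotion ratings in the evoked than the posed condition) could be approximated with a simple within-subject comparison. The sketch below uses hypothetical rating data and scipy; it is not the authors' actual analysis, which used a mixed-design ANOVA with group as a between-subjects factor.

```python
# Illustrative sketch only: hypothetical subjective-emotion ratings for 12
# expressers in the posed and evoked conditions. The study itself used a
# mixed-design ANOVA; a paired t-test is shown here as a minimal stand-in.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
posed_ratings = rng.normal(loc=4.5, scale=1.0, size=12)   # posed condition
evoked_ratings = rng.normal(loc=6.0, scale=1.0, size=12)  # evoked condition

# Within-subject comparison: were evoked emotions felt more strongly than posed?
t, p = stats.ttest_rel(evoked_ratings, posed_ratings)
print(f"evoked vs. posed: t(11) = {t:.2f}, p = {p:.3f}")
```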

Results: Contrary to hypotheses, ERA was significantly higher for the ASD group than the TD group (F(1,37)=4.557, p=.039, η²=.110). However, ASD expressions were rated as significantly more intense (F(1,37)=77.33, p<.001, η²=.676) and less natural (F(1,37)=118.703, p<.001, η²=.762) than TD expressions. Intensity of expressions was positively correlated with ERA for both groups (r=.55, p<.001), and thus the significantly higher intensity ratings for the ASD group may have contributed to their higher ERA. Further, naturalness ratings in the evoked condition were related to ERA for the TD group (r=.32, p<.05) but not the ASD group (r=-.03, n.s.), suggesting that perceived naturalness facilitated ERA selectively for the TD group.
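
As a rough illustration of how a group comparison of ERA and the intensity-ERA relationship might be computed from observer data, the sketch below uses hypothetical per-photo scores with scipy. Variable names and values are placeholders, not the study's dataset, and the study's own analysis modeled accuracy at the observer level rather than the photo level.

```python
# Illustrative sketch only: hypothetical per-photo observer data.
# 'correct' is the proportion of observers who identified the intended emotion;
# 'intensity' is the mean intensity rating for that photo.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_photos = 60
group = np.repeat(["ASD", "TD"], n_photos // 2)   # expresser group per photo
correct = rng.uniform(0.4, 0.9, size=n_photos)    # emotion recognition accuracy
intensity = rng.uniform(2.0, 6.0, size=n_photos)  # mean intensity rating

# Group comparison of ERA (one-way ANOVA across expresser groups)
f, p = stats.f_oneway(correct[group == "ASD"], correct[group == "TD"])
print(f"ERA by group: F = {f:.2f}, p = {p:.3f}")

# Relationship between intensity and ERA across photos
r, p_r = stats.pearsonr(intensity, correct)
print(f"intensity-ERA correlation: r = {r:.2f}, p = {p_r:.3f}")
```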

Conclusions: Higher ERA and intensity ratings for the ASD group compared to the TD group are inconsistent with notions of "flat affect" in ASD. Whether these unexpected results are specific to the sample studied here (high-functioning adults) is unclear. However, the higher intensity and lower naturalness ratings for the ASD group suggest patterns of atypical facial expressivity that may relate to broader social impairments. These findings persisted in the evoked condition, suggesting that facial expressivity abnormalities in ASD extend to real-world contexts.