Facial Emotion Recognition from Videos with Varying Intensity in Autism

Friday, May 15, 2015: 5:30 PM-7:00 PM
Imperial Ballroom (Grand America Hotel)
T. S. Wingenbach1, C. Ashwin2 and M. Brosnan3, (1) Department of Psychology, University of Bath, Bath, United Kingdom, (2) Department of Psychology, University of Bath, Bath, United Kingdom, (3) University of Bath, Bath, United Kingdom
Background: Autism Spectrum Disorders (ASD) are defined by impairments in social communication and interaction, including non-verbal communication such as emotional expressions. However, behavioural studies comparing ASD to controls on facial emotion recognition have shown mixed results: some show group differences, some do not. This might be attributed to differences in the methodology applied, with many investigations including only certain basic emotions, relying on static displays, and using small numbers of trials and facial stimuli. Complex emotions are experienced on a daily basis and must be correctly recognised to function effectively within social interactions, as must subtle displays of emotional expressions. Moreover, reports are generally based on raw hit rates, which do not take into account both correct and incorrect responses; for example, a participant who chooses surprise for both fear and surprise expressions inflates the recognition rate for surprise. To date, no published work exists on facial emotion recognition that includes subtle emotional expressions based on dynamic video recordings together with both basic and complex emotions.

Objectives: To identify whether people with ASD differ from controls in facial emotion recognition using videos of six basic and three complex emotions across three intensity levels (low, intermediate, high). It was expected that those with ASD would show reduced accuracy from videos, especially at high intensity, due to a diminished ability to appropriately use the additional emotional information available in more intense expressions.

Methods: Twelve adolescents and adults with a current diagnosis of ASD (9 male; mean age = 16.92, SD = 0.29) and 12 matched controls (9 male; mean age = 17.25, SD = 0.75) completed 360 trials of a facial emotion recognition task (9 emotions [anger, disgust, fear, happiness, sadness, surprise, contempt, embarrassment, pride] × 3 intensities × 12 faces). Unbiased hit rates were the dependent variable, to take into account both correct and incorrect responses to the emotion categories, and were entered into a repeated-measures ANOVA.
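For readers unfamiliar with the scoring, the sketch below illustrates how per-emotion unbiased hit rates (a measure commonly attributed to Wagner, 1993) can be computed from a stimulus-by-response confusion matrix. It is a minimal illustration rather than the authors' analysis code; the emotion ordering and the toy data are assumptions for demonstration only.

```python
import numpy as np

# Hypothetical sketch: unbiased hit rates (Hu) per emotion category.
# The emotion order and toy confusion matrix are illustrative, not study data.

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness",
            "surprise", "contempt", "embarrassment", "pride"]

def unbiased_hit_rates(confusion: np.ndarray) -> np.ndarray:
    """Compute Hu per emotion from a stimulus-by-response count matrix.

    confusion[s, r] = number of times stimulus category s received response r.
    Hu_s = hits_s**2 / (stimuli presented in s * total uses of response s),
    so indiscriminately choosing one label no longer inflates its score.
    """
    hits = np.diag(confusion).astype(float)
    stimuli_per_category = confusion.sum(axis=1)   # row totals
    uses_per_response = confusion.sum(axis=0)      # column totals
    with np.errstate(divide="ignore", invalid="ignore"):
        hu = np.where(
            (stimuli_per_category > 0) & (uses_per_response > 0),
            hits ** 2 / (stimuli_per_category * uses_per_response),
            0.0,
        )
    return hu

# Toy example: a participant who answers "surprise" to every fear trial
# gets a penalised Hu for surprise (0.5) rather than a perfect raw hit rate.
toy = np.zeros((9, 9), dtype=int)
toy[np.arange(9), np.arange(9)] = 10   # 10 correct responses per emotion...
toy[2, 5], toy[2, 2] = 10, 0           # ...except all fear trials -> surprise
print(dict(zip(EMOTIONS, unbiased_hit_rates(toy).round(2))))
```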

Results: Overall, the ASD group had reduced accuracy compared to controls on the facial emotion recognition task. Both groups showed significantly higher accuracy for high intensity than for intermediate intensity expressions, and likewise for intermediate compared to low intensity expressions. Significant group differences were found for the intermediate and high, but not the low, intensity expressions. Although there were no group differences for specific emotions, group differences emerged in the three-way interaction: the level of intensity impacted differently upon specific emotions, across both basic and complex emotions.

Conclusions: A facial emotion recognition deficit in ASD was identified, but the level of intensity at which group differences occurred varied between emotion categories. Both groups benefitted from the additional emotional information in intermediate and high intensity expressions compared to low intensity expressions, as shown by increased accuracy. Despite this increase and the comparable performance at low intensity, the controls benefitted from the additional emotional information at intermediate intensity more than the ASD group did. This suggests that the control group had a superior ability to utilise the greater emotional information available at higher intensities, across both basic and complex emotions.