Are Avatars Just like Humans? Comparing Autism Spectrum Disorder Individuals’ Emotion Recognition of Virtual Avatar and Human Faces

Poster Presentation
Thursday, May 2, 2019: 5:30 PM-7:00 PM
Room: 710 (Palais des congrès de Montréal)
E. Amico1, A. Swanson2, J. W. Wade3, A. S. Weitlauf1 and Z. Warren4, (1)Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, (2)Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, (3)Vanderbilt University, Nashville, TN, (4)Vanderbilt University Medical Center, Nashville, TN
Background:

Computer-based interventions, as a more economical and accessible option, are commonly used to teach much-needed skills to individuals with autism spectrum disorder (ASD) (Bekele et al., 2014). Virtual reality (VR) can provide a realistic setting with interactive avatars, giving individuals with ASD controlled opportunities to practice and potentially improve their social interaction skills in a stable learning environment (Ramdoss et al., 2011). Avatars have aided in teaching emotion recognition, allowing individuals with ASD to practice social interaction without the anxiety commonly associated with real-world interactions (Parsons, 2000; Golan & Baron-Cohen, 2006). They also offer unique benefits, such as the ability to be controlled and animated (Dyck et al., 2010) and the customization of features such as physical appearance, emotion intensity, and gaze (Joyal, 2014). Despite these benefits, data directly comparing emotion recognition of avatar versus human faces in individuals with ASD are lacking.

Objectives:

The current project aims to provide a controlled, empirical assessment of the validity of virtual avatars with respect to emotion recognition intervention tasks for individuals with ASD.

Methods:

Virtual avatars were customized and rigged using Mixamo and Autodesk Maya, then imported into Unity for use in a virtual classroom environment. Avatars dynamically displayed the seven universal emotions (anger, fear, disgust, happiness, sadness, surprise, and contempt) at three levels of intensity, with each display moving from a neutral expression to the target emotion over 2 seconds. Twenty-two typically developing (TD) adolescents validated these avatar displays prior to the study. Following validation, the avatar presentations were matched to human faces that also dynamically displayed the seven emotions at three intensity levels, moving from a neutral expression to the emotion over 2 seconds. Human faces were drawn from the Amsterdam Dynamic Facial Expression Set - Bath Intensity Variations (ADFES-BIV) database (Wingenbach et al., 2016). Participants selected the emotion displayed by the human or avatar and provided a confidence rating for each response. Task scores were then compared to evaluate performance differences between human and virtual presentations, as well as group differences.
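For illustration, the following is a minimal Python sketch of the timed expression ramp described above. It is not the study's actual Unity implementation (which would be written in C# against the avatar's facial rig); the mapping of the three intensity levels onto rig weights, and all names below, are assumptions.

# Hypothetical sketch of the neutral-to-emotion ramp; the 3 intensity
# levels are assumed to map linearly onto facial-rig weights, which the
# abstract does not specify.

RAMP_SECONDS = 2.0                             # neutral -> emotion duration
INTENSITY_LEVELS = {1: 0.33, 2: 0.66, 3: 1.0}  # assumed weight per level
EMOTIONS = ("anger", "fear", "disgust", "happiness",
            "sadness", "surprise", "contempt")

def expression_weight(elapsed_s: float, level: int) -> float:
    """Linearly interpolated rig weight `elapsed_s` seconds into a trial."""
    progress = min(max(elapsed_s / RAMP_SECONDS, 0.0), 1.0)  # clamp to [0, 1]
    return progress * INTENSITY_LEVELS[level]

def play_trial(emotion: str, level: int, fps: int = 60) -> None:
    """Step through one trial frame by frame, as a renderer's update loop would."""
    for frame in range(int(RAMP_SECONDS * fps) + 1):
        weight = expression_weight(frame / fps, level)
        # In the real system, `weight` would drive the avatar's facial
        # rig for `emotion` on this frame; here it is only computed.
    print(f"{emotion} (level {level}): final weight = {weight:.2f}")

if __name__ == "__main__":
    for emotion in EMOTIONS:
        play_trial(emotion, level=3)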

Results:

There was no significant difference between overall scores for emotions presented by the avatar (M = 14.56, SD = 1.92) and by the human (M = 15.56, SD = 2.48), t(24) = -1.796, p = 0.085. Certain emotions, however, did differ significantly between the avatar and human tasks (Figure 6). “Surprise” and “contempt” were easier to identify when presented by human faces (surprise: M = 2.80, SD = 0.50; contempt: M = 1.56, SD = 1.04) than by avatar faces (surprise: M = 2.04, SD = 0.61; contempt: M = 0.72, SD = 0.89); surprise: t(24) = -5.729, p < 0.001; contempt: t(24) = -3.280, p = 0.003. “Anger,” by contrast, was identified more accurately in avatar faces (M = 2.68, SD = 0.48) than in human faces (M = 2.12, SD = 0.88), t(24) = 2.682, p = 0.013. There was no main effect of group (ASD vs. TD) on identifying emotions presented by avatars or humans.
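As an illustration of the analysis, the sketch below runs a paired-samples t-test of the kind reported above (df = 24 implies 25 paired observations, with each participant completing both conditions). The scores are synthetic, generated only to make the example runnable; they are not the study's data.

import numpy as np
from scipy import stats

# Synthetic per-participant totals, drawn to resemble the reported
# means/SDs; these are NOT the study's data.
rng = np.random.default_rng(seed=0)
avatar_scores = rng.normal(loc=14.56, scale=1.92, size=25)
human_scores = rng.normal(loc=15.56, scale=2.48, size=25)

# Paired-samples t-test: each participant completed both conditions.
t_stat, p_value = stats.ttest_rel(avatar_scores, human_scores)
print(f"t(24) = {t_stat:.3f}, p = {p_value:.3f}")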

Conclusions:

Virtual avatars may serve as a valid and realistic analogue to human faces, making them suitable for measuring and teaching emotion recognition in ASD, with promise for effective use in therapeutic interventions.