27411
Pilot Evaluation of an Adaptive Virtual Reality Based Social Intervention for Teens with ASD

Poster Presentation
Friday, May 11, 2018: 11:30 AM-1:30 PM
Hall Grote Zaal (de Doelen ICC Rotterdam)
J. W. Wade1, E. Bekele1, D. Bian2, A. Swanson3, A. S. Weitlauf4, Z. Warren5 and N. Sarkar1, (1)Vanderbilt University, Nashville, TN, (2)Electrical Engineering & Computer Science, Vanderbilt University, Nashville, TN, (3)Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, (4)Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, (5)Vanderbilt University Medical Center, Nashville, TN
Background: Deficits in social communication are a core characteristic of individuals with Autism Spectrum Disorder (ASD). Interventions designed to address these deficits, especially those based on innovative technology, show evidence of improving outcomes. However, currently available systems do not fully leverage the ability of technology to deliver sensor-based biofeedback in an adaptive, individually tailored manner. Atypical patterns of gaze in individuals with ASD are related to underlying processing of social information and cannot be directly addressed by exclusively performance-based systems. Thus, there is a need to develop and evaluate social intervention technologies capable of using such biofeedback in real time to enhance current approaches.

Objectives: We created and pilot-tested a novel social intervention technology called Multimodal Adaptive Social Interaction in Virtual Environments (MASI-VR). In this system, conversations between users and avatars are mediated by a virtual facilitator who gives feedback designed to guide users toward ideal conversational exchanges (i.e., those that are reciprocal and on topic). We hypothesized that training with MASI-VR would improve emotion recognition accuracy and that group differences would emerge with regard to visual attention.
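For illustration, the facilitator's guidance could be realized as a simple rule applied to each user utterance. The Python sketch below is hypothetical: the function name, inputs, and feedback strings are our assumptions, not details of MASI-VR itself.

    # Hypothetical sketch of a facilitator feedback rule. Whether an
    # utterance is on topic and reciprocal determines the prompt returned.
    # All names and wording are illustrative, not taken from MASI-VR.

    def facilitator_feedback(utterance_topic: str,
                             current_topic: str,
                             is_question_or_reply: bool) -> str:
        """Return a prompt nudging the user toward an ideal exchange."""
        on_topic = utterance_topic == current_topic
        if on_topic and is_question_or_reply:
            return "Nice! You stayed on topic and kept the conversation going."
        if not on_topic:
            return f"Let's get back to talking about {current_topic}."
        return "Try asking your partner a question about that."

    # Example: an on-topic statement that does not continue the exchange.
    print(facilitator_feedback("weekend plans", "weekend plans", False))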

Methods: MASI-VR consists of a 3D virtual high school cafeteria populated by animated avatars, with whom users communicate verbally via a built-in speech recognition module. Two modes of MASI-VR were developed and tested: one in which task progression depends solely on performance in conversational tasks, and another in which progression depends on both performance and the attention users direct toward the emotionally expressive elements of avatars' faces. Eighteen teenagers (N=18; M=15.24 years of age, SD=1.68) with a clinically verified diagnosis of ASD took part in an IRB-approved study to evaluate the novel system. Participants were randomly assigned to one of the two modes described above (n=9 per group). Training consisted of 30 minutes of exposure to MASI-VR at each of three time points. Changes in performance were assessed at two time points (pretest and posttest) using a novel task designed to quantify users' ability to recognize, on animated faces, the seven universal emotions described by Ekman (1993): fear, joy, surprise, anger, sadness, disgust, and contempt.
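The two progression rules can be summarized with a short sketch. The Python fragment below is a minimal illustration under assumed thresholds; the field names, threshold values, and the notion of a fixation ratio are our assumptions, not details reported in the abstract.

    # Minimal sketch of the two progression modes. In the performance-based
    # mode, only the conversational score gates advancement; in the
    # gaze-sensitive mode, advancement additionally requires sufficient
    # fixation on the expressive face regions. Thresholds are assumed.

    from dataclasses import dataclass

    @dataclass
    class TrialData:
        conversation_score: float   # fraction of ideal exchanges, 0..1
        face_fixation_ratio: float  # share of gaze time on expressive regions

    def may_advance(trial: TrialData, gaze_sensitive: bool,
                    score_threshold: float = 0.7,
                    gaze_threshold: float = 0.3) -> bool:
        """Decide whether the user progresses to the next task."""
        performance_ok = trial.conversation_score >= score_threshold
        if not gaze_sensitive:
            return performance_ok
        return performance_ok and trial.face_fixation_ratio >= gaze_threshold

    # Strong performance with little attention to the face advances only
    # in the performance-based mode.
    trial = TrialData(conversation_score=0.8, face_fixation_ratio=0.1)
    print(may_advance(trial, gaze_sensitive=False))  # True
    print(may_advance(trial, gaze_sensitive=True))   # False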

Results: All but one participant completed the study. Accuracy on the emotion recognition task increased significantly in both the performance-based (p < .05; 12.05% increase) and gaze-sensitive (p < .05; 12.95% increase) groups. With regard to gaze measures, blink rate decreased significantly from pretest to posttest (p < .05), durations of fixation on the mouth decreased, and durations of fixation on the forehead and entire face increased (all p < .05).
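For readers reproducing the pretest-to-posttest comparison, one natural analysis is a paired-samples t-test; the abstract reports p-values without naming the test, so the choice of ttest_rel and the placeholder data below are assumptions for illustration only.

    # Sketch of a pretest/posttest comparison, assuming paired-samples
    # t-tests. The accuracy arrays are fabricated placeholders, not the
    # study's data.

    import numpy as np
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(0)
    pretest = rng.uniform(0.4, 0.7, size=9)            # placeholder accuracies
    posttest = pretest + rng.uniform(0.05, 0.2, size=9)  # placeholder gains

    t_stat, p_value = ttest_rel(posttest, pretest)
    pct_increase = 100 * (posttest.mean() - pretest.mean()) / pretest.mean()
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}, increase = {pct_increase:.2f}%")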

Conclusions: We implemented and pilot-tested a novel system for social intervention in teens with ASD. Training with this system yielded significant improvements in social skills as measured by emotion recognition accuracy. Additionally, we observed significant changes in patterns of visual attention related to the processing of emotional faces. These results show promise for the use of adaptive technology in social skills training and warrant further investigation of our novel system.