A Virtual Reality-Based Interactive System with an Assistive Avatar to Influence Visual Attention in Children with ASD

Poster Presentation
Friday, May 3, 2019: 10:00 AM-1:30 PM
Room: 710 (Palais des congrès de Montréal)
A. Amat1, A. S. Weitlauf2, A. Swanson3, N. Sarkar4 and Z. Warren5, (1)Electrical Engineering, Vanderbilt University, Nashville, TN, (2)Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, (3)Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, (4)Adaptive Technology Consulting, Murfreesboro, TN, (5)Vanderbilt University Medical Center, Nashville, TN
Background:

Individuals with ASD often spend less time looking at another person’s facial features (particularly the eye region) compared to non-facial areas. Reduced eye gaze during social interaction negatively impacts social skills development, facial expression processing and information sharing. Virtual reality platforms, which can present social games while tracking performance and recording continuous quantitative measures, may offer a promising avenue for eye gaze detection and training.

Objectives:

This study developed and implemented an assistive avatar system that autonomously tracks and responds to participants’ gaze patterns in a game-like setting to influence visual attention to the eyes. The system, which uses a virtual avatar with head and eye animation control, provides closed-loop interaction, gives users performance feedback, adaptively changes task difficulty, and tracks eye gaze in real time. System objectives were to 1) improve the ability to follow the avatar’s gaze, 2) improve the ability to identify gaze cues in social task settings, and 3) study the effect of gaze sharing and gaze following on emotion recognition.
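
To make the closed-loop idea concrete, the following is a minimal sketch in Python (the actual system was built in Unity, not Python) of one update of such a loop: the participant's latest gaze point determines whether the system registers a piece selection, re-cues the participant, or simply keeps waiting. All names, the rectangle-based region test, and the response-window rule are illustrative assumptions, not the authors' implementation.

    def closed_loop_step(gaze_xy, cued_piece_rect, elapsed_s, response_window_s):
        """Return the system's next action for one update of the closed loop.

        gaze_xy: (x, y) screen coordinates from the eye tracker.
        cued_piece_rect: (x0, y0, x1, y1) bounds of the piece the avatar cued.
        elapsed_s: seconds since the avatar's gaze cue began.
        response_window_s: how long the participant is given to respond.
        """
        x, y = gaze_xy
        x0, y0, x1, y1 = cued_piece_rect
        if x0 <= x <= x1 and y0 <= y <= y1:
            return "select_piece_and_give_feedback"  # gaze reached the cued piece
        if elapsed_s > response_window_s:
            return "repeat_gaze_cue"                 # cue was missed; re-cue the participant
        return "wait"                                # keep animating and sampling gaze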

Methods:

Twenty participants (10 with ASD, 10 TD), ranging from 7 to 12 years of age, completed three visits (pre- and post-tests and 33 training tasks). Participants provided input to the system through a Tobii eye tracker, which monitored their gaze shifts and prompted the system to respond accordingly. Training tasks consisted of a virtual avatar surrounded by grayed-out puzzle pieces, each of which corresponded to a colored target image at the bottom of the screen. The avatar’s eye gaze shifted to the target piece, requiring the participant to follow the avatar’s gaze to select that piece and complete the puzzle.
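
One plausible way to turn streaming eye-tracker samples into a "piece selected by gaze" decision is a simple dwell-time rule, sketched below in Python. The 300 ms threshold, sample format, and region layout are assumptions for illustration, not the authors' code.

    DWELL_MS = 300  # how long gaze must stay on one piece to count as a selection (assumed)

    def detect_selection(samples, regions, dwell_ms=DWELL_MS):
        """samples: iterable of (timestamp_ms, x, y); regions: dict name -> (x0, y0, x1, y1)."""
        current, start = None, None
        for t, x, y in samples:
            hit = next((name for name, (x0, y0, x1, y1) in regions.items()
                        if x0 <= x <= x1 and y0 <= y <= y1), None)
            if hit != current:
                current, start = hit, t        # gaze moved to a new region; restart the dwell timer
            elif hit is not None and t - start >= dwell_ms:
                return hit                     # gaze dwelled long enough on this piece
        return None

    # Example: ~330 ms of gaze inside piece_1 triggers its selection.
    regions = {"piece_1": (100, 100, 200, 200), "piece_2": (300, 100, 400, 200)}
    samples = [(0, 150, 150), (160, 152, 151), (330, 149, 148)]
    print(detect_selection(samples, regions))  # -> "piece_1"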

The game was developed in Unity v5, using finite state machines to model the system (game, avatar, and puzzle states). Avatars were developed in Autodesk Maya and displayed 7 distinct gaze directions (positions on screen) at 3 different levels: head and eyes moving together, moderate eye movement only, and minimal eye movement. Adaptive difficulty levels added challenge as skills improved (e.g., speed of the avatar’s gaze shift toward the object; time allowed for the participant to respond). The pre- and post-tests were bubble-popping games built on the same game architecture as the training tasks.
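
A rough Python sketch of the kind of finite state machine and difficulty tiers described above follows. State names, transition events, and parameter values are assumptions chosen for illustration; the actual states and values in the Unity implementation may differ.

    from enum import Enum, auto

    class GameState(Enum):
        IDLE = auto()
        CUEING = auto()            # avatar shifts gaze toward the target piece
        AWAIT_RESPONSE = auto()    # participant must follow the gaze cue
        FEEDBACK = auto()          # system responds to a selection
        PUZZLE_COMPLETE = auto()

    # Example difficulty tiers: faster gaze cues and shorter response windows as
    # skill improves (gaze-shift speed and window durations are assumed values).
    DIFFICULTY = [
        {"gaze_shift_s": 1.5, "response_window_s": 8.0, "cue_mode": "head_and_eyes"},
        {"gaze_shift_s": 1.0, "response_window_s": 5.0, "cue_mode": "eyes_moderate"},
        {"gaze_shift_s": 0.6, "response_window_s": 3.0, "cue_mode": "eyes_minimal"},
    ]

    def next_state(state, event):
        """Return the next game state for a given (state, event) pair."""
        transitions = {
            (GameState.IDLE, "start_trial"): GameState.CUEING,
            (GameState.CUEING, "cue_done"): GameState.AWAIT_RESPONSE,
            (GameState.AWAIT_RESPONSE, "piece_selected"): GameState.FEEDBACK,
            (GameState.AWAIT_RESPONSE, "timeout"): GameState.CUEING,
            (GameState.FEEDBACK, "more_pieces"): GameState.CUEING,
            (GameState.FEEDBACK, "puzzle_done"): GameState.PUZZLE_COMPLETE,
        }
        return transitions.get((state, event), state)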

Results:

Preliminary analyses comparing pre- and post-test gaze data showed significant improvement in performance and gaze-following skills in participants with ASD after they completed the training games (improvements in time to complete games, response time to the avatar’s gaze cues, and gaze fixation on the eye region). No significant improvements were observed in the TD group. Data from the training games will be analyzed to obtain gaze fixation patterns and durations, and results for the ASD and TD groups will be compared to evaluate differences in visual interaction processing.
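
One simple measure behind the planned fixation analysis is the proportion of gaze time spent on the avatar's eye region (an area of interest, AOI). The Python sketch below shows how such a ratio could be computed from timestamped gaze samples; the sample format and AOI bounds are assumptions, not the authors' analysis pipeline.

    def eye_region_fixation_ratio(samples, eye_aoi):
        """samples: list of (timestamp_ms, x, y); eye_aoi: (x0, y0, x1, y1) in screen coords."""
        if len(samples) < 2:
            return 0.0
        in_aoi_ms = total_ms = 0.0
        x0, y0, x1, y1 = eye_aoi
        for (t0, x, y), (t1, _, _) in zip(samples, samples[1:]):
            dt = t1 - t0                      # time attributed to the earlier sample
            total_ms += dt
            if x0 <= x <= x1 and y0 <= y <= y1:
                in_aoi_ms += dt
        return in_aoi_ms / total_ms if total_ms else 0.0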

Conclusions:

We developed a virtual reality-based assistive avatar embedded in an interactive system that monitors participant eye gaze to assess visual attention during social tasks. Pilot results indicate that, after completing the training sessions, children with ASD show improvement in joint attention skills.