Objectives: In the present study, we developed a system that integrates virtual reality (VR) technology with dynamic eye tracking to create gaze-sensitive virtual social communication scenarios capable of delivering individualized feedback. We then studied the implications of such individualized feedback for adolescents with autism spectrum disorder (ASD).
Methods: Six adolescents with ASD (aged 13-18 years) participated in this study. VR-based social communication tasks (Trials 1-5) were designed to project human characters (avatars, or virtual classmates) telling personal stories with context-relevant objects in the background. The participants were asked to make their virtual classmates as comfortable as possible while listening to their presentations. However, it was not explicitly stated that, during a presentation, a speaker feels good when the audience pays attention to him/her (by looking toward the speaker). The idea was to give the participants indirect feedback about their viewing patterns and thereby study how that feedback affected them as the task proceeded. The presented visual stimulus was segmented into Regions of Interest (ROIs). We computed real-time behavioral viewing indices of the participants, namely fixation duration (FD) and object-to-face ratio (OFR). At the end of each trial, the participant was asked a story-related question. Based on the participant's response and the percentage of FD on the avatar's face (Face_ROI), our system generated individualized feedback. We then examined the impact of this individualized feedback on the participants' viewing indices.
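The abstract does not detail how the viewing indices or the feedback rule were implemented; the following Python sketch shows one plausible way to compute FD on the Face_ROI and OFR from time-stamped gaze samples and to form a simple feedback decision. The ROI labels, the fixed sampling interval, and the 60% face-looking threshold are illustrative assumptions, not values reported in the study.

```python
# Minimal sketch (not the authors' implementation): per-trial fixation
# duration (FD) on the avatar's face ROI and object-to-face ratio (OFR)
# from gaze samples, plus a toy individualized-feedback rule.
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float    # timestamp in seconds
    roi: str    # ROI hit by this sample: "face", "object", or "other"

def fixation_durations(samples, dt):
    """Accumulate looking time per ROI, assuming a fixed sample interval dt."""
    totals = {"face": 0.0, "object": 0.0, "other": 0.0}
    for s in samples:
        totals[s.roi] = totals.get(s.roi, 0.0) + dt
    return totals

def viewing_indices(samples, dt):
    """Return (% of trial time spent on Face_ROI, object-to-face ratio)."""
    fd = fixation_durations(samples, dt)
    trial_time = dt * len(samples)
    face_pct = 100.0 * fd["face"] / trial_time if trial_time else 0.0
    ofr = fd["object"] / fd["face"] if fd["face"] > 0 else float("inf")
    return face_pct, ofr

def feedback(face_pct, answered_correctly, face_threshold=60.0):
    """Hypothetical feedback rule: praise when the story question was answered
    and enough time was spent on the speaker's face; otherwise give an
    indirect prompt. The actual rule and threshold are not given in the abstract."""
    if answered_correctly and face_pct >= face_threshold:
        return "Your classmate felt comfortable during the presentation."
    return "Your classmate was not sure you were listening. Let's try again."
```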
Results: The participants' behavioral viewing indices, such as FD on the Face_ROI and OFR, showed improvement across trials. Specifically, all participants increased the time spent looking at the Face_ROI from the first to the fifth trial (i.e., after four trials with individualized feedback), and a dependent samples t-test indicated a statistically significant change for the group. Likewise, all participants demonstrated an improvement in OFR from the first to the fifth trial, with the group-level change in OFR also statistically significant.
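For illustration, a dependent (paired) samples t-test of the kind reported above could be computed as sketched below; the values used are randomly generated placeholders for six participants, not the study's data.

```python
# Illustrative paired t-test comparing first- and fifth-trial viewing indices.
import numpy as np
from scipy import stats

def compare_trials(trial1_values, trial5_values):
    """Dependent samples t-test on per-participant indices
    (e.g., % FD on Face_ROI or OFR) from the first and fifth trials."""
    t_stat, p_value = stats.ttest_rel(trial5_values, trial1_values)
    return t_stat, p_value

# Placeholder values for six participants (not the study's data).
rng = np.random.default_rng(0)
trial1 = rng.uniform(20, 40, size=6)            # % FD on Face_ROI, Trial 1
trial5 = trial1 + rng.uniform(5, 15, size=6)    # % FD on Face_ROI, Trial 5
print(compare_trials(trial1, trial5))
```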
Conclusions: The results demonstrate the feasibility of a VR-based gaze-sensitive system that provides individualized feedback on one's behavioral viewing in an online, continuous manner. Our investigation indicates that such feedback can contribute to improvement in one's behavioral viewing patterns during social communication. This capability suggests that the developed technology could be integrated into more complex and sophisticated social interaction tasks to achieve targeted goals when paired with appropriate reinforcement paradigms.