Using Positional Tracking to Monitor Gaze in VR - Pilot Study

Friday, May 12, 2017: 10:00 AM-1:40 PM
Golden Gate Ballroom (Marriott Marquis Hotel)
L. A. Hart1, R. A. Oien2, E. Velasquez3, Q. Wang4, M. Mademtzi4, F. R. Volkmar5 and F. Shic6, (1)Yale School of Medicine, New Haven, CT, (2)The Arctic University of Norway, Tromsø, Norway, (3)Full Sail University, Orange County, FL, (4)Child Study Center, Yale University School of Medicine, New Haven, CT, (5)Child Study Center, Yale School of Medicine, New Haven, CT, (6)Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA
Background: The use of video as a tool to facilitate learning and skill development has undergone numerous iterations within this field and others, and its applications have expanded to include eye tracking and assessment, among others. More recently, 360° video has begun to enter mainstream media, in part alongside the arrival of virtual reality head-mounted displays (HMDs) as affordable consumer products. In addition to their potential benefits for ecological validity (Cummings, Bailenson, & Fidler, 2012), the sophisticated positional tracking systems built into most commercial HMDs represent an opportunity to leverage head orientation as an alternative measure of visual attention within VR environments. Evidence from both eye tracking (Wang et al., 2015) and earlier work with HMDs (Jarrold et al., 2013) suggests that visual attention can be beneficially altered through interactive training and prompts. Although commercial HMDs are currently limited to positional (head) tracking rather than true eye tracking, their stereoscopic 360° environments and real-time positional data make them a viable and affordable candidate platform for interventions and assessments that would benefit from the ability to track visual attention.

Objectives: The first phase was conducted to 1) assess participants' level of comfort during extended use of HMDs, and 2) evaluate gaze (head orientation) as a measure of visual attention. The primary goal of Phase 2 was to use these findings to inform the design of a background VR environment capable of 1) recording data automatically, 2) displaying playlists of 360° videos, and 3) providing a framework for custom stimulus setup.

Methods: Phase 1 comprised four participants with neurodevelopmental disorders including ASD, both verbal and nonverbal (ages 6 to 18), and two typically developing (TD) participants (ages 5 to 11). Each participant watched five 360° videos, varied across dimensions of social and physical intensity, on an Oculus DK2. Visual attention was defined as the central 25° of the DK2's visible 100° field of view (FoV). Questions and observations concerning comfort and overall enjoyment were recorded by a confederate. For Phase 2, our initial approach was translated and modified within Unity, a popular game development platform.
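
The abstract does not detail how this central-FoV criterion was computed; the following Unity/C# sketch (all names hypothetical) illustrates one way such a check could work, flagging a stimulus as attended when it lies within 12.5° of the headset camera's forward vector, i.e., inside the central 25° cone:

    using UnityEngine;

    // Sketch only: attach to the headset camera object. Flags a stimulus
    // as "attended" when it lies within the central 25° of the view,
    // i.e., within 12.5° of the camera's forward vector.
    public class CentralGazeCheck : MonoBehaviour
    {
        public Transform stimulus;        // stimulus object in the scene
        const float HalfConeDeg = 12.5f;  // half of the 25° central window

        void Update()
        {
            Vector3 toStimulus = stimulus.position - transform.position;
            float angle = Vector3.Angle(transform.forward, toStimulus);
            if (angle <= HalfConeDeg)
                Debug.Log($"{Time.time:F2}s attending {stimulus.name} ({angle:F1} deg)");
        }
    }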

Results: On average, participants spent 14 minutes and 21 seconds wearing the DK2, and responded positively both during the sessions and to post-viewing questions. Notably, none of the participants with ASD paused or removed the headset, and only one TD participant briefly removed their headset during the "physically intense" 360° video (a rollercoaster ride). Results from the attention measure generally aligned with previous findings in the literature. Translation to Unity allowed for automatic data recording through a combination of Unity's raycasting API and tagged "detection zones" for stimuli. Advantages included eliminating the additional time previously spent on data collection after setup, as well as greater overall flexibility.
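
As an illustration of this setup (a sketch under stated assumptions, not the authors' actual implementation), the following Unity/C# example casts a ray from the headset camera each frame and logs a timestamped record whenever it hits a collider tagged as a detection zone; the class name, tag string, and log file name are hypothetical:

    using System.IO;
    using UnityEngine;

    // Sketch only: attach to the headset camera. Assumes a "DetectionZone"
    // tag defined in Unity's Tag Manager and assigned to stimulus colliders.
    [RequireComponent(typeof(Camera))]
    public class DetectionZoneLogger : MonoBehaviour
    {
        StreamWriter log;

        void Start()
        {
            // Application.persistentDataPath is a writable per-app directory.
            string path = Path.Combine(Application.persistentDataPath, "gaze_log.csv");
            log = new StreamWriter(path);
            log.WriteLine("time,zone");
        }

        void Update()
        {
            // Cast along the camera's forward vector (head orientation).
            if (Physics.Raycast(transform.position, transform.forward, out RaycastHit hit)
                && hit.collider.CompareTag("DetectionZone"))
            {
                log.WriteLine($"{Time.time:F3},{hit.collider.name}");
            }
        }

        void OnDestroy() => log?.Close();
    }

Because records are written out while the session runs, no separate data-collection pass is needed after setup.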

Conclusions: This initial evaluation of commercial HMDs for tracking visual attention, and of their potential usability in applications and research targeting individuals with ASD, is encouraging; however, further investigation is needed before concerns over sensory sensitivities are fully resolved. While the second phase of the project is still in development, progress and results from ongoing recruitment will be reported.