Describing a Methodology for Evaluating Robot-Assisted Intervention Using Eye-Tracking

Saturday, May 13, 2017: 12:00 PM-1:40 PM
Golden Gate Ballroom (Marriott Marquis Hotel)
R. L. Beights, A. M. Mastergeorge, V. Jain and W. H. Dotson, Texas Tech University, Lubbock, TX
Background: Robots, like other new forms of interactive technology, are used in clinical interventions for children with ASD at an increasing rate (Cabibihan et al., 2013; Diehl et al., 2012; Scassellati et al., 2012). However, the effective components of child response to robot-assisted intervention (RAI) remain largely unknown beyond initial reports of engagement and interest in robots as a novel technology (e.g., Begum et al., 2016). The majority of published research provides little quantitative assessment (e.g., eye-tracking to measure visual attention) or experimental manipulation that could establish a strong foundation for using robots in early intervention (Coeckelbergh et al., 2015).

Objectives: The purpose of the current study is to describe a methodology for evaluating the utility of RAI for young children with ASD, using eye-tracking metrics as the primary measure of attention and response to instruction. The primary aim is to describe how factors relevant to treatment effectiveness can be identified based on visual attention to instructional targets of imitative motor actions and verbal fill-in statements. Understanding the attentional factors involved in viewing RAI will guide the design and selection of effective technology-facilitated early intervention (EI) strategies.

Methods: A task analysis for the assessment of visual attention was completed prior to implementing the protocol with participants 2 to 5 years of age. This task analysis provided detailed steps for stimulus design and experimental session preparation. Six-minute experimental sessions examined visual attention to 28 ten-second robot-delivered (RDI) and human-delivered (HDI) instructional stimuli, measured directly through eye-tracking. Pilot participants included two male children diagnosed with ASD. Areas of interest (AOI) were defined a priori based on salient features of the instruction. Data collection involved multiple gaze metrics, including gaze fixation and gaze duration.
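
Note on metric computation (illustrative): the minimal Python sketch below shows one way fixation counts and dwell times within rectangular AOIs might be tallied from fixation events. The AOI coordinates, fixation values, and function names are hypothetical and do not reflect the study's actual eye-tracker software or analysis pipeline.

    from dataclasses import dataclass

    @dataclass
    class Fixation:
        x: float           # gaze x position, screen pixels
        y: float           # gaze y position, screen pixels
        duration_ms: float # fixation duration

    @dataclass
    class AOI:
        name: str
        left: float
        top: float
        right: float
        bottom: float

        def contains(self, fix: Fixation) -> bool:
            # Simple rectangular hit test for a fixation point.
            return self.left <= fix.x <= self.right and self.top <= fix.y <= self.bottom

    def summarize_aoi_metrics(fixations, aois):
        # Count fixations and accumulate dwell time (ms) per AOI.
        summary = {aoi.name: {"fixation_count": 0, "dwell_ms": 0.0} for aoi in aois}
        for fix in fixations:
            for aoi in aois:
                if aoi.contains(fix):
                    summary[aoi.name]["fixation_count"] += 1
                    summary[aoi.name]["dwell_ms"] += fix.duration_ms
        return summary

    # Hypothetical head/mouth AOI for a verbal fill-in trial, with made-up fixations.
    aois = [AOI("head_mouth", left=400, top=100, right=700, bottom=350)]
    fixations = [Fixation(500, 200, 180.0), Fixation(650, 300, 240.0), Fixation(100, 500, 90.0)]
    print(summarize_aoi_metrics(fixations, aois))

In this sketch, the third fixation falls outside the AOI and is excluded, so the printed summary reports two fixations and 420 ms of dwell time for head_mouth; an actual pipeline would typically draw AOIs and fixation events from the eye-tracker's export rather than hard-coded values.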

Results: Pilot data focused on gaze fixation and visualization of fixation points within the AOI. Gaze fixation data for motor actions revealed greater visual attention to salient instructional features when viewing RDI versus HDI (Figure 1). Participants showed a greater number of fixation points following the pattern of movement in the RDI condition, whereas fixations in the HDI condition were more localized to the face. Gaze fixation data for verbal fill-ins showed visual attention within the AOI (head/mouth) in both the RDI and HDI conditions (Figure 2). Gaze duration data indicated that sustained visual attention was greater in the HDI condition than in the RDI condition. Additional data for up to 20 participants will be analyzed and discussed.

Conclusions: Differential patterns of visual attention within pre-defined AOI were observed across the RDI and HDI conditions. Participants showed increased gaze fixation when viewing RDI for motor actions, suggesting that the robot stimuli promoted attention to multiple salient features of the instruction rather than more selective attention to a single feature. Conclusions regarding the utility of this methodology for evaluating RAI, along with implications for intervention, will be discussed.