A Brief Remote Eye Tracking Paradigm May Enhance Clinical Evaluation of Autism Risk and Language Level

Panel Presentation
Friday, May 3, 2019: 11:45 AM
Room: 517B (Palais des congrès de Montréal)
T. W. Frazier1, E. W. Klingemier2, E. Youngstrom3 and A. Y. Hardan4, (1)Autism Speaks, New York, NY, (2)Cleveland Clinic Center for Autism, Cleveland, OH, (3)University of North Carolina at Chapel Hill, Chapel Hill, NC, (4)Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA
Background: Brief, scalable, objective measures are needed to inform clinical assessment of children at risk for autism spectrum disorder (ASD). Numerous eye tracking studies have demonstrated social attention differences in people with autism that emerge early and remain relatively stable across development. Recent research has shown that aggregating gaze measures across diverse stimuli has the potential to inform autism risk. Remote gaze tracking may also have the potential to rapidly evaluate cognitive functions, such as language ability, adding clinical value where traditional face-to-face testing is difficult to complete. Below we describe validation of an empirically derived autism risk measure and demonstrate how the paradigm can be extended to include rapid evaluation of language level.

Objectives: To evaluate aggregate gaze-based measures informing autism risk and language level.

Methods: Data were collected from 201 youth (ages 1.6-17.6) referred for clinical evaluation of ASD to empirically derive (train and test) autism risk and symptom indices. After calibration, participants viewed an ~7-min stimulus battery that included 45 dynamic social stimuli from 8 distinct paradigms and 6 static receptive language arrays. Our published data from this sample indicated that the autism risk and symptom indices had replicable validity for differentiating youth with ASD from a challenging comparison sample of referred individuals with other developmental conditions. Building on these data, we are collecting a separate validation sample of 30 youth (ages 2-14) referred for ASD evaluation, who viewed the same stimulus battery as the original cohort. Minor variations to the original procedures and equipment were intentionally introduced to verify that validity would be maintained during clinical implementation.

To develop a gaze-based language index, we evaluated the subset of our original cohort (n=114) that completed both a valid eye tracking assessment and a clinical language evaluation. The gaze-based language index was created by standardizing and averaging fixation time percent, fixation count, and average fixation duration across 16 receptive language targets. Analyses evaluated the relationships between the gaze-based language index and clinical language test scores.
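As a rough sketch of this aggregation step (an illustration of standardize-and-average compositing, not the authors' published pipeline), the index can be formed by z-scoring each gaze metric across participants and averaging the standardized values. The function and array names below are hypothetical, and each input is assumed to be already aggregated over the 16 receptive language targets.

```python
import numpy as np

def gaze_language_index(fix_time_pct, fix_count, avg_fix_dur):
    """Hypothetical composite: z-score each per-participant gaze metric
    across the sample, then average the three standardized scores.
    Inputs are 1-D arrays with one value per participant, already
    aggregated over the 16 receptive language targets."""
    metrics = [np.asarray(m, dtype=float)
               for m in (fix_time_pct, fix_count, avg_fix_dur)]
    z_scores = [(m - m.mean()) / m.std(ddof=1) for m in metrics]
    return np.mean(z_scores, axis=0)  # one composite index per participant
```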

Results: Initial results from the independent validation cohort indicated that the autism risk index differentiated ASD from non-ASD cases (AUC=.78). Gaze-based autism symptom indices showed significant correlations with ADOS-2 severity scores (r>.30). Using data from the original cohort, the gaze-based language measure had strong relationships with clinical language scores (Figure 1; r>.47) and good sensitivity to language impairment (Figure 2; AUC=.71; sensitivity=.82 at specificity=.71).
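For readers implementing a similar evaluation, metrics of the kind reported here (AUC and sensitivity at a fixed specificity) can be computed from case labels and index scores along the following lines. This is a generic scikit-learn sketch, not the study's analysis code, and it assumes scores are oriented so that higher values indicate greater impairment.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def auc_and_sens_at_spec(y_true, scores, target_spec=0.71):
    """y_true: 1 = language impaired, 0 = not impaired (hypothetical labels).
    scores: gaze-based index, oriented so higher = more impaired."""
    auc = roc_auc_score(y_true, scores)
    fpr, tpr, _ = roc_curve(y_true, scores)
    specificity = 1 - fpr
    # Highest sensitivity among thresholds that meet the specificity target.
    idx = np.where(specificity >= target_spec)[0][-1]
    return auc, tpr[idx]
```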

Conclusions: Brief, scalable, objective eye-tracking measures aggregated across social and receptive language stimuli show strong potential to inform clinical assessment of autism and language level. Future research is needed to validate the gaze-based language measure, potentially adding stimuli to improve measurement across the full range of language levels. Additional studies of the autism risk and symptom indices are needed within clinical trials to evaluate sensitivity to change, and in community or population samples to evaluate screening potential. Machine learning approaches may further increase the validity of these promising gaze-based autism risk and language measures.