A Brief Remote Eye Tracking Paradigm May Enhance Clinical Evaluation of Autism Risk and Language Level
Objectives: To evaluate whether aggregate gaze-based measures can inform clinical assessment of autism risk and language level.
Methods: Data were collected from 201 youth (ages 1.6-17.6) referred for clinical evaluation of ASD to empirically derive (train and test) autism risk and symptom indices. After calibration, participants viewed an ~7-min stimulus battery that included 45 dynamic social stimuli from 8 distinct paradigms and 6 static receptive language arrays. Our published data from this sample indicated that the autism risk and symptom indices had replicable validity for differentiating youth with ASD from a challenging comparison sample of referred individuals with other developmental conditions. Building on these data, we are collecting a separate validation sample of 30 youth (ages 2-14) referred for ASD evaluation, who viewed the same stimulus battery as the original cohort. Minor variations in the original procedures and equipment were intentionally introduced to evaluate whether validity would be maintained during clinical implementation.
To develop a gaze-based language index, we evaluated the subset of our original cohort (n=114) that completed both a valid eye tracking assessment and a clinical language evaluation. The gaze-based language index was created by standardizing and averaging fixation time percent, fixation count, and average fixation duration to 16 receptive language targets. Analyses evaluated the relationships between the gaze-based language index and clinical language test scores.
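The composite described above (standardize each gaze metric, then average) can be sketched as follows. This is a minimal illustration, not the authors' analysis code; the function name and the assumption that each metric is a per-participant array are hypothetical.

```python
import numpy as np

def language_index(fix_time_pct, fix_count, avg_fix_dur):
    """Hypothetical sketch of the gaze-based language index:
    z-score each metric across participants, then average the
    three standardized metrics into one composite per participant."""
    metrics = np.column_stack([fix_time_pct, fix_count, avg_fix_dur])
    z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)
    return z.mean(axis=1)  # one composite score per participant
```

Because each metric is standardized before averaging, the three metrics contribute equally to the composite regardless of their original scales.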
Results: Initial results from the independent validation cohort indicated that the autism risk index differentiated ASD from non-ASD cases (AUC=.78). Gaze-based autism symptom indices showed significant correlations with ADOS-2 severity scores (r>.30). Using data from the original cohort, the gaze-based language measure had strong relationships with clinical language scores (Figure 1; r>.47) and good sensitivity to language impairment (Figure 2; AUC=.71; sensitivity=.82 at specificity=.71).
Conclusions: Brief, scalable, objective eye-tracking measures aggregated across social and receptive language stimuli show strong potential to inform clinical assessment of autism and language level. Future research is needed to validate the gaze-based language measure, potentially adding stimuli to improve measurement across the full range of language levels. Additional studies of the autism risk and symptom indices are needed within clinical trials to evaluate sensitivity to change, and in community or population samples to evaluate screening potential. Machine learning approaches may further increase the validity of these promising gaze-based autism risk and language measures.