Piloting “Autoscreen”: Preliminary Results of a Novel Digital Tool for Clinically Efficient Assessment and Decision Making for Toddlers with ASD Concerns

Poster Presentation
Friday, May 3, 2019: 5:30 PM-7:00 PM
Room: 710 (Palais des congrès de Montréal)
J. W. Wade1, A. Swanson2, A. S. Weitlauf3, Q. Humberd4, N. Sarkar1,5 and Z. Warren6, (1)Adaptive Technology Consulting, Murfreesboro, TN, (2)Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, (3)Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, (4)Independent Consultant, Nashville, TN, (5)Vanderbilt University, Nashville, TN, (6)Vanderbilt University Medical Center, Nashville, TN
Background: Early, accurate identification of young children with Autism Spectrum Disorder (ASD) represents a pressing clinical care challenge. Current American Academy of Pediatrics (AAP) practice guidelines endorse ASD screening at 18 and 24 months of age, followed by referral of at-risk children for evaluation by qualified providers. At present, a variety of resource barriers result in large numbers of children never being screened and in prolonged waits for diagnostic assessments. Consequently, the average age of ASD diagnosis in the US remains between 4 and 5 years.

Objectives: Drawing on advanced computational analysis of a large sample of toddlers who received gold-standard ASD evaluations, and on an iterative design process involving leading diagnostic experts at partner academic institutions, we created a stand-alone screening application. The app presents community pediatric providers with a 15-minute ASD risk assessment method: a structured interaction guided by app-based instructions, with an in-app rating system for ASD symptoms.

Methods: This pilot study included 18- to 36-month-old children (n = 24) clinically referred for ASD evaluation, as well as professionals and paraprofessionals (n = 24) licensed to conduct clinical ASD evaluations (e.g., in pediatrics, clinical psychology, and speech-language pathology). Each provider used Autoscreen to assess a different child. Immediately afterward, a different, blinded provider conducted a full diagnostic evaluation of the child; meanwhile, the Autoscreen provider entered behavioral codes into Autoscreen, from which a risk index was automatically computed.
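The abstract does not specify the form of the a priori predictive model. As a purely illustrative sketch, a dichotomous risk index of this kind can be derived by weighting provider-entered behavioral codes and thresholding the weighted sum; every item name, weight, and the cutoff below is hypothetical, not Autoscreen's actual model:

```python
# Hypothetical sketch of a dichotomous risk index: behavioral codes are
# weighted, summed, and thresholded. Item names, weights, and the cutoff
# are illustrative assumptions only.

# Example behavioral codes rated by the provider (0 = typical, 2 = atypical)
codes = {"eye_contact": 2, "joint_attention": 1, "name_response": 2, "gestures": 0}

# Hypothetical per-item weights from a predictive model
weights = {"eye_contact": 1.5, "joint_attention": 1.0, "name_response": 1.2, "gestures": 0.8}

RISK_CUTOFF = 4.0  # hypothetical decision threshold

def risk_index(codes, weights):
    """Weighted sum of behavioral codes."""
    return sum(weights[item] * score for item, score in codes.items())

def classify(codes, weights, cutoff=RISK_CUTOFF):
    """Dichotomize the risk index into high versus low risk."""
    return "high" if risk_index(codes, weights) >= cutoff else "low"

print(risk_index(codes, weights))  # 1.5*2 + 1.0*1 + 1.2*2 + 0.8*0 = 6.4
print(classify(codes, weights))    # prints high
```

Any real screener would estimate such weights and the cutoff from the training sample of gold-standard evaluations; the point of the sketch is only the structure of the computation.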

Results: Participating professionals regarded the functionally robust prototype favorably. Providers reported (a) excellent usability of the tool (System Usability Scale mean = 87.36), (b) high acceptability of the tool (Acceptability, Likely Effectiveness, Feasibility, and Appropriateness Questionnaire mean = 87.28), and (c) 88% agreement with Autoscreen’s dichotomous risk index (i.e., high versus low risk) computed with the a priori predictive model. A receiver operating characteristic (ROC) analysis comparing Autoscreen’s predicted ASD risk with best-estimate clinical diagnoses offered encouraging evidence of Autoscreen’s potential as an instrument for reliable ASD risk classification. Accuracy, sensitivity, specificity, and other performance metrics suggested that Autoscreen can compete with established screeners while addressing several provider pain points: although quite preliminary, the observed accuracy of 79%, sensitivity of 0.77, and specificity of 0.86 outperform many commonly used screening instruments.
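Because the abstract reports only summary metrics, the confusion-matrix counts below are hypothetical; the sketch simply shows how accuracy, sensitivity, and specificity relate to true/false positive and negative screening outcomes:

```python
# Illustrative computation of screening performance metrics from a
# confusion matrix. The counts below are hypothetical examples, not the
# study's data.
tp, fn = 8, 2   # children with ASD flagged high-risk / missed by the screener
tn, fp = 9, 1   # children without ASD classified low-risk / flagged high-risk

accuracy = (tp + tn) / (tp + tn + fp + fn)  # overall proportion correct
sensitivity = tp / (tp + fn)                # true-positive rate
specificity = tn / (tn + fp)                # true-negative rate

print(f"accuracy={accuracy:.2f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
# prints accuracy=0.85 sensitivity=0.80 specificity=0.90
```

For a screener, sensitivity (not missing children who have ASD) is usually weighted more heavily than specificity, since screen-positive children go on to a full diagnostic evaluation.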

Conclusions: Ultimately, we hypothesize that Autoscreen could powerfully enhance early identification of children with ASD and improve provider confidence around risk assessment and referral decisions. Although these early results are promising, there are a few key areas in which Autoscreen must be improved—both in terms of risk classification and technological enhancement—before it could be considered ready for real-world deployment. Most notably, a larger, higher-powered study capable of evaluating reliability metrics is required to demonstrate Autoscreen’s credibility as an impactful clinical tool. Such a study is part of planned future work.