Are We Failing the M-CHAT? Self-Assessment in a Diverse Community Sample

Thursday, May 14, 2015: 11:30 AM-1:30 PM
Imperial Ballroom (Grand America Hotel)
C. B. Nadler1,2, C. Low-Kapalu1, L. Pham1 and S. S. Nyp1,2, (1)Developmental and Behavioral Sciences, Children's Mercy Kansas City, Kansas City, MO, (2)University of Missouri - Kansas City School of Medicine, Kansas City, MO
Background:  The American Academy of Pediatrics (AAP) recommends standardized screening for autism at the 18- and 24/30-month well child visits, but this standard is met inconsistently. Moreover, when practitioners (and researchers) implement evidence-based screening measures, little attention is paid to fidelity of implementation. In the case of the M-CHAT/F and M-CHAT-R/F, an evidence-based scoring algorithm prompts providers to conduct a structured follow-up interview and/or refer positive screens for evaluation and early intervention. Without the follow-up interview, much of the measure's validated psychometric value (i.e., its positive predictive value) is lost. No quality assurance procedures are available to quantify the degree to which autism screening in practice or research contexts adheres to validated implementation standards.
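
For readers unfamiliar with the scoring algorithm referenced above, the sketch below renders it in Python. This is an illustrative rendering, not the authors' code or study software; the thresholds and critical-item set follow the published M-CHAT scoring instructions (a screen is positive when 3 or more of the 23 items, or 2 or more of the 6 critical items, are failed), after which the structured follow-up interview is indicated.

```python
# Illustrative sketch of the published M-CHAT scoring rule (not study code).
# Critical items per the M-CHAT scoring instructions: 2, 7, 9, 13, 14, 15.
CRITICAL_ITEMS = {2, 7, 9, 13, 14, 15}

def mchat_screen_positive(failed_items: set[int]) -> bool:
    """Positive screen: >= 3 total failed items, or >= 2 critical items failed."""
    total_failures = len(failed_items)
    critical_failures = len(failed_items & CRITICAL_ITEMS)
    return total_failures >= 3 or critical_failures >= 2

def next_step(failed_items: set[int]) -> str:
    """Validated workflow: a positive screen triggers the follow-up interview."""
    if mchat_screen_positive(failed_items):
        return "administer structured follow-up interview; refer if still positive"
    return "screen negative; continue routine surveillance"

# Example: failing two critical items (2 and 7) yields a positive screen
# even though the total count (2) falls below the 3-item threshold.
assert mchat_screen_positive({2, 7}) is True
assert mchat_screen_positive({1, 3}) is False
```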

Objectives:  To investigate the implementation fidelity of autism screening via the M-CHAT/F (and/or M-CHAT-R/F) conducted in the primary care clinics of a large urban hospital. 

Methods:  Electronic health records for 18- and 24/30-month pediatric well child visits during a one-month study period were manually reviewed to extract autism screening implementation parameters.

Results:  The review yielded a sample of 281 eligible clinic visits; the children served were majority male (60.9%) and racially/ethnically diverse (42.7% African American, 31.7% Hispanic, 10.3% White). Primary care providers documented that 4.3% of visits included a positive screen based on the M-CHAT; in contrast, re-scoring of parent-completed M-CHATs yielded 13.7% of visits with a positive screen (based on both the critical-item and total-score approaches). No visit documented use of the structured M-CHAT follow-up interview or of any component of the M-CHAT-R/F. Providers’ sensitivity with the M-CHAT was 0.214 (6 of 28 positive screens identified). Providers documented a referral for early intervention or evaluation services in 50% of cases (6 of 12) in which a positive screen was identified in clinic; of children who screened positive on re-scoring of the parent-completed M-CHAT, only 14.3% (4 of 28) were referred.
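
The headline figures above follow directly from the reported counts; the minimal arithmetic check below (illustrative Python, using only numbers stated in this abstract) reproduces them.

```python
# Counts reported in the Results section above.
provider_identified = 6        # re-scored positives that providers also identified
rescored_positive = 28         # positive screens on re-scoring parent M-CHATs
provider_positive_visits = 12  # visits with a provider-documented positive screen
referred_from_clinic = 6       # referrals among provider-identified positives
referred_from_rescoring = 4    # referrals among re-scored positives

sensitivity = provider_identified / rescored_positive              # 6/28
referral_rate_clinic = referred_from_clinic / provider_positive_visits   # 6/12
referral_rate_rescored = referred_from_rescoring / rescored_positive     # 4/28

print(f"Provider sensitivity: {sensitivity:.3f}")                 # 0.214
print(f"Referral rate (clinic-identified): {referral_rate_clinic:.1%}")   # 50.0%
print(f"Referral rate (re-scored positives): {referral_rate_rescored:.1%}")  # 14.3%
```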

Conclusions:  Manual chart review allowed for direct evaluation of the clinical implementation and interpretation of the M-CHAT in a large urban hospital. Despite routine administration of the M-CHAT at 18 and 24/30 months, providers failed to identify over 75% of children who screened positive on the measure. Pediatrician surveillance (i.e., clinical judgment without the aid of standardized tools) is well known to identify only 20-30% of children with developmental delays, and employing a standardized tool incorrectly does not appear to add incremental value in clinical practice. This pilot project demonstrates the viability of quantifying implementation and interpretation fidelity for autism screening. Efforts are underway to use this methodology to monitor quality improvement activities focused on provider education and training, as well as systems-level changes to facilitate standardized autism screening. Future studies are necessary to determine the extent to which other hospitals that use measures like the M-CHAT fail to monitor implementation fidelity, as well as how such monitoring improves functional adherence to AAP guidelines. Research with developmental screening tools may also need to routinely include implementation fidelity data to better characterize results in community samples, given the consistent (and sometimes explicit) omission of the structured follow-up interview in recent publications.