International Meeting for Autism Research: Development of a Fidelity of Implementation Tool for a School-Based Intervention

Thursday, May 12, 2011
Elizabeth Ballroom E-F and Lirenta Foyer Level 2 (Manchester Grand Hyatt)
9:00 AM
J. Kinard, K. P. Wilson, L. R. Watson, B. Boyd, S. Horvath and J. Grisnik, University of North Carolina at Chapel Hill, Chapel Hill, NC
Background: Measurement of intervention fidelity is vital when determining evidence-based practices for individuals with autism. Without knowledge of intervention fidelity, researchers cannot determine the causes of poor treatment outcomes, which could result from a failed intervention or from poor fidelity of implementation (Dane & Schneider, 1998). Unfortunately, fidelity measurement is often overlooked by researchers. Investigators may not understand this process, or may not know how to apply fidelity measures from other fields to their own work (Century, Rudnick, & Freeman, 2010). To overcome these difficulties, this research team created a multi-component intervention fidelity measure for use in a school-based intervention for preschoolers with autism. The measure incorporates the following dimensions of intervention fidelity: implementer knowledge and planning, intervention quality and dosage, and progress monitoring (Cordray, Hulleman, & Lesnick, 2008; Dane & Schneider, 1998). The team predicts that (a) the measure will differentiate intervention and non-intervention classrooms, and (b) the description of how the measure was created will allow other investigators to adapt it to their own intervention research.

Objectives: (a) To describe the process of creating a multi-component measure of intervention fidelity; and (b) to report psychometric data and data illustrating the tool's effectiveness in differentiating intervention and non-intervention classrooms.

Methods: This multi-component tool was developed through examination of extant research, modification of a previously developed measure, and consideration of the ASAP intervention model and components. In addition to describing this tool-development process, the presentation will report results of a quasi-experimental comparison-group study used to pilot the fidelity measure in preschool classrooms implementing the ASAP intervention and in comparison (business-as-usual) classrooms. The pilot study includes trial measurement of implementer knowledge and planning, intervention quality and dosage, and progress monitoring across groups in order to identify the aspects of intervention fidelity that best discriminate between intervention and business-as-usual classrooms (discriminant analysis results will be reported). The pilot study also assesses the tool's psychometric properties through examination of inter-rater reliability and stability across three bi-monthly measurements.
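For illustration only, the sketch below shows how the group-discrimination and inter-rater reliability analyses described above might be run in Python with scikit-learn. It is a minimal sketch under assumed data: the file name, column names, and rater variables are hypothetical, and it is not the ASAP team's actual analysis code.

```python
# Illustrative sketch only (hypothetical data, not the ASAP team's analysis code):
# a discriminant analysis of fidelity component scores across groups, plus an
# inter-rater agreement check, using pandas and scikit-learn.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import cross_val_score

# Hypothetical data: one row per classroom observation, with a score for each
# fidelity component and a group label (1 = ASAP intervention, 0 = business-as-usual).
df = pd.read_csv("fidelity_scores.csv")  # hypothetical file
components = ["quality", "knowledge", "dosage", "planning", "progress_monitoring"]
X, y = df[components], df["group"]

# Linear discriminant analysis: which combination of fidelity components best
# separates intervention from comparison classrooms?
lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print("Discriminant function weights:", dict(zip(components, lda.coef_[0])))
print("3-fold cross-validated classification accuracy:",
      cross_val_score(lda, X, y, cv=3).mean())

# Inter-rater reliability on a categorical fidelity rating scored by two observers.
kappa = cohen_kappa_score(df["rater1_quality"], df["rater2_quality"])
print("Cohen's kappa for intervention quality ratings:", kappa)
```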

Results: Based on the intervention literature, this research team's experience, and the ASAP intervention components, the following areas were identified as vital in measuring intervention fidelity in preschool classrooms implementing the ASAP intervention: intervention quality (e.g., appropriateness of target goals/materials, responsiveness to the child, appropriateness of prompts), implementer knowledge (e.g., language used to talk about goals and their importance), intervention dosage, team planning (e.g., team meeting frequency), and progress monitoring practices (e.g., data collection, complexity of notes). Psychometric properties and discriminant analysis results are forthcoming and will be reported in this presentation, along with implications for future revisions of the measure.

Conclusions: The importance of measuring intervention fidelity is well established, and the ASAP research team's experience in developing and piloting a multi-component fidelity measure underscores the complexity of this undertaking.
