Autism Program Environments Rating Scale (APERS): Psychometric Properties

Thursday, May 11, 2017: 5:30 PM-7:00 PM
Golden Gate Ballroom (Marriott Marquis Hotel)
S. L. Odom1, A. W. Cox2, K. Hume3, J. Sideris4, S. Hedges5 and S. Kucharczyk6, (1)University of North Carolina, Chapel Hill, NC, (2)Frank Porter Graham Institute, University of North Carolina - Chapel Hill, Chapel Hill, NC, (3)University of North Carolina, Chapel Hill, Carrboro, NC, (4)Frank Porter Graham Child Development Institute, Chapel Hill, NC, (5)UNC at Greensboro, Chapel Hill, NC, (6)Curriculum & Instruction, University of Arkansas, Fayetteville, AR
Background:  The increased prevalence of Autism Spectrum Disorders (ASD) has created a need for high-quality programs for students with ASD in public school settings. Although there have been limited attempts to assess the quality of programs for students with ASD, none has provided evidence of the psychometric properties of the assessments. In fact, the absence of a reliable and valid standardized assessment of program quality has limited program development efforts and led to litigation challenges from parents of students with ASD. The Autism Program Environments Rating Scale (APERS) was designed to assess the quality of educational programs for students and youth with ASD. The APERS generates a summary quality rating by drawing information from 10 domains, shown in Figure 1.

Objectives:  The purpose of this study was to determine the psychometric properties of the APERS. The research questions addressed were: What is the internal consistency of the APERS? What is the factor structure of the APERS? Is the APERS sensitive to changes across time when a professional development program designed to improve program quality is implemented?

Methods:  The APERS is an assessment of 60+ items (the number of items differs by form) that employs a 1-5 Likert rating format. Coders base ratings on observations in schools, interviews, and document review. Preschool/elementary (P/E) and middle school/high school (M/H) versions of the scale exist. APERS data have been collected in inclusive and noninclusive programs for students with ASD. The current study draws from two datasets. The first dataset was collected in 76 classes for students with ASD located in 12 states, by staff from the National Professional Development Center on ASD (NPDC); data were collected at the beginning of the school year and again at the end. The second dataset was collected in 60 high school programs located in three states, by staff from the Center on Secondary Education for Students with ASD (CSESA), at the beginning of the school year only.
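To make the rating format concrete, the minimal Python sketch below shows how item-level 1-5 ratings could be rolled up into domain means and a total mean item rating (the summary score referenced in the Results). The domain and item names are hypothetical placeholders, not the actual APERS items or the 10 domains shown in Figure 1.

import pandas as pd

# Hypothetical APERS-style item ratings (1-5 Likert scale), keyed by domain.
# Domain and item names are illustrative only; the real instrument has 60+
# items distributed across 10 domains.
ratings = pd.DataFrame({
    "domain": ["Environment", "Environment", "Communication", "Communication", "Transition"],
    "item":   ["env_01", "env_02", "comm_01", "comm_02", "trans_01"],
    "rating": [4, 5, 3, 4, 2],
})

domain_means = ratings.groupby("domain")["rating"].mean()   # domain-level quality ratings
total_mean_item_rating = ratings["rating"].mean()           # overall summary rating
print(domain_means)
print(total_mean_item_rating)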

Results:  To examine internal consistency, Cronbach's alphas were calculated. For the NPDC dataset, alphas were .95 and .96 for the P/E and M/H forms, respectively. For the CSESA dataset (M/H form only), alphas were .94 for inclusive programs and .96 for noninclusive programs. An exploratory factor analysis of the NPDC data yielded the strongest evidence for a one-factor solution, which was interpreted as a measure of overall program quality. This factor model was then applied to the CSESA data in a confirmatory factor analysis and yielded similar results. In addition, for the NPDC data, t-tests on the P/E and M/H APERS total mean item ratings showed significant positive changes across time (p < .01 for both; d = 1.28 for P/E, d = 1.10 for M/H) for schools that had participated in a professional development project, indicating that the APERS is sensitive to program effects across time.
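For readers who want to see how these kinds of statistics are computed, the sketch below calculates Cronbach's alpha, a paired t-test across time, and Cohen's d on simulated rating data. The data are randomly generated and the pooled-standard-deviation form of d is one common choice; neither reflects the actual NPDC or CSESA datasets nor necessarily the exact effect-size formula used in the study.

import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (observations x items) matrix of ratings."""
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scores
    k = items.shape[1]
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def cohens_d(pre: np.ndarray, post: np.ndarray) -> float:
    """Effect size for a pre/post change, using the pooled standard deviation."""
    pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2.0)
    return (post.mean() - pre.mean()) / pooled_sd

# Simulated 1-5 item ratings for 76 programs on a 60-item form (illustration only).
rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(76, 60)).astype(float)
print("alpha:", cronbach_alpha(items))

pre = items.mean(axis=1)                                    # total mean item rating, time 1
post = np.clip(pre + rng.normal(0.5, 0.4, size=76), 1, 5)   # hypothetical time-2 ratings
res = stats.ttest_rel(post, pre)                            # paired t-test across time
print("t:", res.statistic, "p:", res.pvalue, "d:", cohens_d(pre, post))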

Conclusions: Data from these studies provide evidence that the APERS is a reliable and valid measure of the quality of program environments for students with ASD.