28425
Put It to the Test: Do Standardized Testing Practices Hurt Students with Autism?

Poster Presentation
Thursday, May 10, 2018: 5:30 PM-7:00 PM
Hall Grote Zaal (de Doelen ICC Rotterdam)
M. Seidman1, J. T. Russo1 and D. S. Mandell2, (1)University of Pennsylvania, Philadelphia, PA, (2)Center for Mental Health, University of Pennsylvania, Philadelphia, PA
Background: The state of Pennsylvania mandates that all students between third and eighth grade, regardless of cognitive ability, take standardized achievement tests. These tests are administered in school during a 3-week period. State testing guidelines strongly encourage schools to assign as proctors teachers who do not teach the students taking the test. As a result, autism support teachers often are asked to proctor the tests for classrooms that are not their own, effectively removing them from their autism support classrooms for much of several days or weeks in the spring. This practice disrupts the teachers’ daily instructional routines and their implementation of evidence-based practices (EBPs). With just a month and a half of school remaining when the standardized testing period ends, teachers may be less likely to resume routine EBP implementation. No research to date has investigated the effect of standardized test proctoring on autism support teachers’ use of EBPs and, in turn, on students’ cognitive and language ability.

Objectives: To estimate the impact of standardized testing practices on (1) autism support teachers’ implementation of evidence-based practices for children with autism and (2) students’ cognitive and language ability in a large school district in Pennsylvania.

Methods: The sample includes 69 kindergarten-through-second-grade autism support teachers and 143 students in a large school district in Pennsylvania; 19 teachers were pulled from their classrooms to proctor state exams. All teachers received coaching in five EBPs for children with autism: discrete trial training, pivotal response training, data collection, positive reinforcement, and visual schedules. Data on teachers’ accuracy of EBP implementation were collected 3-4 times over a 7-month period through direct observation. Data on teachers’ frequency of EBP implementation were collected 6-9 times over a 9-month period through teacher self-report. Students’ cognitive and language abilities were measured at baseline and exit using the DAS-II and the Bracken Basic Concept Scale (Receptive and Expressive). A difference-in-differences design will be used to estimate the extent to which being pulled to proctor standardized tests for students with and without cognitive disabilities is associated with teachers’ use of EBPs and the accuracy of their EBP implementation for their students with autism. The effect of this disruption on student outcomes will be measured using linear regression with random effects for classroom.
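A minimal sketch of one possible difference-in-differences specification consistent with the design described above, with a random effect for classroom; the variable names are illustrative assumptions and are not specified in the abstract:

$$Y_{ijt} = \beta_0 + \beta_1\,\text{Proctor}_j + \beta_2\,\text{Post}_t + \beta_3\,(\text{Proctor}_j \times \text{Post}_t) + u_j + \varepsilon_{ijt}$$

Here $Y_{ijt}$ is the EBP frequency or accuracy measure (or student outcome) for observation $i$ in classroom $j$ at time $t$, $\text{Proctor}_j$ indicates that the teacher was pulled to proctor state exams, $\text{Post}_t$ indicates an observation after the testing window, $u_j$ is the classroom random effect, and $\beta_3$ is the difference-in-differences estimate of the disruption associated with proctoring.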

Results: Data collection is complete and analyses are underway.

Conclusions: Over the last fifteen years, opponents of federal testing requirements have cited various harmful effects on public schools: teachers being forced to teach to the test, pressure on underperforming schools, teacher evaluations based on test results, pressure on school leaders and states to improve results, and time taken away from instruction. Results of this study will lead to a better understanding of the ways in which school-wide standardized testing practices influence autism support teachers’ use of evidence-based practices. Findings indicating a decrease in teachers’ EBP use or poorer accuracy of EBP implementation would have significant policy implications for the practices associated with standardized test administration.