17785
Efficacy of a Facial Emotion Training Program for Children and Teens with ASD

Saturday, May 17, 2014
Atrium Ballroom (Marriott Marquis Atlanta)
B. Evans-Smith1, N. M. Russo-Ponsaran2, J. Russo2, J. K. Johnson2 and C. McKown3, (1)Behavioral Sciences; Rush NeuroBehavioral Center, Rush University Medical Center, Skokie, IL, (2)Rush NeuroBehavioral Center, Department of Behavioral Sciences, Rush University Medical Center, Skokie, IL, (3)Rush University Medical Center, Skokie, IL
Background:  

Some children with autism spectrum disorders (ASD) have difficulty interpreting others' feelings and expressing their own through facial expressions (Klin, Sparrow, et al., 1999). They may not direct their visual attention to the facial features that convey emotional information (Moore, Heavey, & Reidy, 2012), may struggle to process rapidly presented stimuli (Rump, Giovannelli, et al., 2009), and may have difficulty expressing emotion with their own faces (Yirmiya et al., 1989). The authors developed the Facial Emotion Training Program, which incorporates methods addressing these challenges that are not found in current treatment programs (Russo-Ponsaran, Evans-Smith, et al., 2013).

Objectives:  

Our main objective was to demonstrate that children who participated in this training program would improve in recognition (speed and accuracy) and self-expression of facial emotion relative to a wait list control group. A secondary objective was to evaluate skill generalization.

Methods:  

Twenty-five high-functioning, verbal children with ASD (mean age = 11.1 years, range 8-15 years; mean IQ = 99.8 ± 20) who demonstrated a facial emotion recognition deficit were block-randomized to an active intervention or a waitlist control group. The intervention was a modification of a commercially available computerized facial emotion training tool, the MiX™ by Humintell, which uses dynamic adult faces. The modifications addressed the special learning needs of individuals with ASD and included coach assistance, didactic instruction for seven basic emotions, repeated practice with increasing presentation speeds, guided attention to relevant facial cues, and imitation of expressions. Training occurred twice a week for approximately 45-60 minutes across an average of six sessions. Outcome measures, administered before and after treatment, assessed (1) dynamic affect recognition with the MiX™ and the Child and Adolescent Social Perception Scale (CASP); (2) static affect recognition with the CATS, DANVA2, and NEPSY-II Affect Recognition subtest; (3) self-expression (coded from videotape); and (4) social functioning by parent report on the Emory Dyssemia Index and the Vineland Interpersonal Relations subscale.

Results:  

A series of ANCOVAs was run to assess between-group differences, controlling for full-scale IQ, age, autism severity, and pre-test ability on each measure. Pre-test values did not differ significantly between the active intervention and waitlist control groups. Analyses of the emotion recognition measures indicated significant main effects for the MiX, DANVA2, and NEPSY-II (p < .05, all comparisons) immediately following training. Coding for the CASP is currently underway. There were no between-group differences in parent-reported social functioning. Self-expression significantly improved after training (p = .001). Paired t-tests for within-group comparisons were also significant for most recognition measures (MiX, DANVA2, NEPSY-II, and CATS 3 Faces); for the Facial Expression and Nonverbal Receptivity subscales of the parent-report Emory Dyssemia Index (p < .05); and for self-expression (p < .001).

Conclusions:  

The Facial Emotion Training Program enabled children and teens with ASD to identify feelings in facial expressions more accurately and more quickly, with stimuli from both the training tool and the generalization measures, and to demonstrate improved self-expression with their own faces. Anecdotal reports from parents also indicated increased awareness of emotions in daily interactions.