Facial Emotion Recognition and Expression Deficits in Children with ASD and the Effects of Training

Saturday, May 17, 2014
Atrium Ballroom (Marriott Marquis Atlanta)
J. Russo, B. Evans-Smith and N. M. Russo-Ponsaran, Rush NeuroBehavioral Center, Department of Behavioral Sciences, Rush University Medical Center, Skokie, IL
Background:  The more accurately one can express an emotion, the better one can recognize that emotion in others (e.g., Iacoboni & Mazziotta, 2007). Deficits in the recognition and expression of facial emotions are common in autism spectrum disorders (ASD). We therefore developed a computer- and coach-assisted facial emotion training program for children with ASD that targets both emotion recognition and expression.

Objectives:  Using a wait-list control (WLC) and active intervention (AI) design, we tested recognition and expression of facial emotions before and after training. Our first objective was to demonstrate improvements in recognition and expression accuracy after training. Our second objective was to establish whether specific emotions remained more difficult for children to recognize and express, even after training.

Methods:  Twenty-one children with ASD (WLC n=11; AI n=10; ages 8-16 years) participated; diagnoses were confirmed with the SCQ, ADI-R, and ADOS. Eligible children demonstrated a facial emotion recognition deficit on direct assessment. Children were randomly assigned to either the WLC or the AI group. We chose a commercially available computer program (MiX™ by Humintell) as the primary training tool but modified it to include coaching assistance to direct children’s attention, to coach expression exercises, and to help navigate the program. The program covers seven emotions: anger, disgust, fear, surprise, contempt, joy, and sadness. Before training, children completed the MiX pre-test and were asked to show each of the seven emotions with their faces. After training, children completed the MiX post-test and again demonstrated each facial expression. Each child in the WLC group was matched to an AI participant and completed pre- and post-testing on the same temporal schedule but did not receive the intervention until completing the full testing protocol. Performance on the MiX was scored as percent correct for each emotion. Individual emotion expressions were video-recorded and coded on a 0-2 scale (0 = no facial movement; 1 = attempted display of the emotion; 2 = appropriate display of the emotion).

Results:  Within the AI group, performance on perception improved significantly from pre- to post-testing for all seven emotions (p≤.05, all comparisons). Emotion expression likewise improved significantly from pre- to post-testing for all emotions except fear and sadness (p≤.05). WLC children showed no significant change in either perception or expression at post-test. Rank-ordering recognition difficulty for all children at pre-test, contempt was the most difficult emotion (mean=30% correct), followed by disgust (33%), fear (46%), sadness (58%), anger (60%), surprise (68%), and finally joy (83%). Similarly, contempt was the most difficult and joy the easiest emotion to express at pre-test.

Conclusions:  Results support the effectiveness of the training program in improving children’s perception and expression of facial emotions. Future studies can identify which emotions are particularly difficult for children with ASD, with an eye toward tailoring clinical interventions to the individual child. Further investigation is needed before causal inferences can be drawn about the relationship between mastery of perception and mastery of expression.