20155
Using Robots as Therapeutic Agents to Teach Children with Autism to Recognize Facial Expressions

Friday, May 15, 2015: 10:00 AM-1:30 PM
Imperial Ballroom (Grand America Hotel)
S. Mavadati1, H. Feng2, P. B. Sanger1, S. Silver1, A. Gutierrez3 and M. H. Mahoor2, (1)University of Denver, Denver, CO, (2)Electrical and Computer Engineering, University of Denver, Denver, CO, (3)Psychology, Florida International University, Miami, FL
Background:  Recognizing and mimicking facial expressions are important cues for building rapport in human-human communication. Individuals with Autism Spectrum Disorder (ASD) often have difficulties recognizing and mimicking social cues, such as facial expressions. Over the last decade, several studies have shown that individuals with ASD exhibit superior engagement toward objects, and particularly robots. However, the majority of these studies have focused on robots' appearance and engineering design concepts, and very little research has investigated the effectiveness of robots in therapeutic and treatment applications. In fact, the critical question of how robots can help individuals with autism practice and learn social communication skills and apply them in their daily interactions has not yet been addressed.

Objectives:  In a multidisciplinary research study, we explored how effective robot-based therapeutic sessions can be and to what extent they can improve the social experiences of children with ASD. We developed and executed a robot-based, multi-session therapeutic protocol consisting of three phases (i.e., baseline, intervention, and human-validation sessions) that can serve as a treatment mechanism for individuals with ASD.

Methods:  We recruited seven children (2 female, 5 male), 6-13 years old (mean = 10.14 years), diagnosed with high-functioning autism. We employed NAO, a programmable humanoid robot, to interact with the children in a series of social games over several sessions. We captured all visual and audio communication between NAO and the child using multiple cameras. All capturing devices were connected to a monitoring system outside the study room, where a coder observed and annotated the child's responses online. In every session, NAO asked the child to recognize the prototypic facial expression (i.e., happy, sad, angry, or neutral) shown in five different photos. In the 'baseline' sessions, we measured each child's prior knowledge of emotion and facial expression concepts. In the 'intervention' sessions, NAO provided verbal feedback (if needed) to help the child recognize the facial expression. After the intervention sessions, we included two 'human-validation' sessions (with no feedback) to evaluate how well the child could apply the learned concepts when NAO was replaced with a human.

Results:  The following table shows the mean and standard deviation (STD) of facial expression recognition rates for all subjects in the three phases of our study.

Facial Expression Recognition Rate (%)

              Baseline        Intervention    Human-Validation
Mean (STD)    69.52 (36.28)   85.83 (20.54)   94.28 (15.11)

Conclusions:  The results demonstrate the effectiveness of NAO in teaching and improving facial expression recognition (FER) skills in children with ASD. More specifically, at baseline, the low FER rate (69.52%) with high variability (STD = 36.28) indicates that, overall, participants had difficulty recognizing expressions. The intervention results confirm that NAO can reliably teach children to recognize facial expressions (higher accuracy with lower STD). Interestingly, in the human-validation phase, children recognized the basic facial expressions with even higher accuracy (94.28%) and very limited variability (STD = 15.11). These results suggest that robot-based feedback and intervention with a customized protocol can improve the learning capabilities and social skills of children with ASD.