Do Children with Autism Spectrum Disorder Learn to Distrust and Deceive a Social Robot?

Oral Presentation
Friday, May 11, 2018: 3:04 PM
Grote Zaal (de Doelen ICC Rotterdam)
Y. Zhang1, W. Song2, Z. Tan3, J. Chen4, H. Zhu5 and L. Yi6, (1)Peking University, Beijing, China, (2)Centre for Optical and Electromagnetic Research, South China Academy of Advanced Optoelectronics, South China Normal University, Guangzhou, China, (3)South China Academy of Advanced Optoelectronics, South China Normal University, Guangzhou, China, (4)School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden, (5)Child Developmental & Behavioral Center, Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China, (6)School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
Background:

Children with autism spectrum disorder (ASD) have difficulty understanding other people’s mental states (Baron-Cohen, 2001) and learning complex social rules in interpersonal interaction (Jones et al., 2013). For example, children with ASD showed difficulty learning to distrust and deceive an adult who had repeatedly deceived them (Yi et al., 2014). However, little is known about how children with ASD learn social rules from social robots.

Robots can offer children with ASD a safe and predictable environment and help them develop social skills (Dautenhahn et al., 2004). Studies using interactive robots in therapy to help children with ASD learn social skills have shown promising outcomes (Feil-Seifer et al., 2008; Kozima et al., 2007; Stanton et al., 2008). In our study, we used a social robot to teach hidden social rules to children with ASD through trust and deception games.

Objectives:

To examine how children with ASD learn social rules to distrust and deceive a social robot.

Methods:

Twenty-one 5- to 8-year-old children with ASD and 21 age-matched typically developing (TD) peers played trust and deception games with a social robot, trying to win as many tokens as possible. In the trust task, children were asked to find a token hidden under one of three identical cups while the robot provided misleading information about its location (pointing to an empty cup). This procedure was repeated for 10 trials. In the deception task, children switched roles with the robot: the children hid the token and the robot looked for it. With this manipulation, we aimed to investigate whether children would deceive the robot. Children hid the token and then indicated a location for the robot to search over another 10 trials.

Results:

The analysis of overall performance across the 10 trials indicated that: (a) children with ASD showed a trust bias towards the robot (t = -3.77, p = .001); and (b) they were also less likely to deceive the robot (t = -2.47, p = .02). These findings replicate the difficulty children with ASD showed in distrusting and deceiving a human experimenter in previous research (Yi et al., 2014).

A survival analysis of the trial-by-trial data, shown in Figure 1, further characterized the learning process over the 10 trials in both groups. In the trust task, the ASD group learned to distrust the robot more slowly than the TD group (p < .001). The group difference in the deception task was marginally significant (p = .064).

Conclusions:

Our study indicated that, when interacting with a social robot, children with ASD showed a trust bias and greater difficulty engaging in deception compared to TD children. This does not imply, however, that inferring the robot's mental states is more difficult than inferring a human's. It may therefore still be promising to develop robot-based intervention protocols built on this paradigm to teach social rules to children with ASD.