Comparing Child Verbalisations during Robot-Assisted and Adult-Led Conditions of an Emotion-Recognition Teaching Programme

Poster Presentation
Thursday, May 2, 2019: 11:30 AM-1:30 PM
Room: 710 (Palais des congrès de Montréal)
A. Williams1, A. M. Alcorn1, E. Ainger2, A. Baird3, N. Cummins4, B. Schuller4 and E. Pellicano5, (1)Centre for Research in Autism and Education, University College London, London, United Kingdom, (2)East London NHS Foundation Trust, London, United Kingdom, (3)Chair for Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Augsburg, Germany, (4)Informatics, University of Augsburg, Augsburg, Germany, (5)Macquarie University, Sydney, Australia
Background:

Robot-assisted interactions have the potential to be beneficial for autistic children within an educational environment. Robots are more predictable in their actions than humans and, as such, are thought to place lower levels of cognitive and social demands on autistic children. In the context of a more predictable social environment, children may interact with robots differently than they do with adults, including with regard to verbalisations, or voiced utterances (words and non-words).

Objectives:

We sought to describe and compare the number and type of child verbalisations and the presence of autism-related vocal-features within these verbalisations, during robot-assisted and adult-led interactions, in a group of autistic children with additional intellectual disabilities and limited spoken communication.

Methods:

Twenty-four autistic children aged between 5 and 12 years (7 female) took part in either a robot-assisted (n=12; M age = 8.0 years; SD = 2.7) or an adult-led (n=12; M age = 8.2 years; SD = 2.4) condition of an emotion-recognition teaching programme (Howlin, Baron-Cohen & Hadwin, 1999). There were no significant differences between the two teaching conditions in terms of age (p=0.88), autism severity (as measured by the CARS2-ST; p=0.28) or verbal language scores (using a bespoke measure; p=0.11). Children participated in between one and five video- and audio-recorded sessions (M=3.7 sessions; SD=1.0) over the course of one week. Three researchers independently annotated child verbalisations in the audio and video recordings to determine the type of verbalisation (e.g., speech, shout, non-speech) and the presence of autism-related vocal features (e.g., echolalia, stereotyped speech). A simple “majority voting” method was used to determine a final dataset of agreed labels for these verbalisations.
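To illustrate the majority-voting step, the sketch below shows one way an agreed label could be derived from three independent annotations; the data structure and function names are assumptions for illustration and are not the authors' code.

from collections import Counter

def majority_label(labels):
    # Return the label chosen by at least two of the three annotators,
    # or None if all three disagree (such cases would need separate resolution).
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 2 else None

# Example: one verbalisation labelled by three annotators
annotations = ["speech", "speech", "shout"]
print(majority_label(annotations))  # -> "speech"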

Results:

Of the 17,265 verbalisations from all speakers (adult, robot, child), 5,209 (30.2%) were child verbalisations. Speech was the most commonly labelled type of child verbalisation in each teaching condition, with no significant difference between conditions (robot-assisted: 43.7% of verbalisations; adult-led: 46.8% of verbalisations). There were also few significant differences in the types of child verbalisation: while there were more ‘shouting’ verbalisations in the adult-led than in the robot-assisted condition (p=0.04), there were no significant group differences in the number of autism-related vocal features (e.g., echolalia, pronoun errors). Furthermore, most of the verbalisations (89.7%) across teaching conditions were judged not to contain any autism-related vocal features.

Conclusions:

Overall, and unexpectedly, the number and type of child verbalisations in adult-led and robot-assisted interactions were very similar. Even in the context of the more predictable social environment offered by the robot, there was no difference in the number of child verbalisations or in the number of autism-related vocal features relative to adult-led interactions. Perhaps surprisingly for a participant group with limited verbal language, most of the verbalisations in both teaching conditions were judged not to contain any autism-related vocal features. Future work will focus on examining the function and content of verbalisations, including the coding of unusual affect within robot-assisted and adult-led interactions.