International Meeting for Autism Research: Robot-Mediated Joint Attention Tasks for Children at Risk with ASD: A Step towards Robot-Assisted Intervention

Friday, May 13, 2011
Elizabeth Ballroom E-F and Lirenta Foyer Level 2 (Manchester Grand Hyatt)
9:00 AM
E. T. Bekele1,2, U. Lahiri2,3, J. A. Davidson2,4, Z. Warren2,4 and N. Sarkar2,3, (1)Electrical Engineering, Vanderbilt University, Nashville, TN, (2)Autos Lab, Vanderbilt University, Nashville, TN, (3)Mechanical Engineering, Vanderbilt University, Nashville, TN, (4)Pediatrics, Vanderbilt University, Nashville, TN
Background:

Emerging technology has the potential to play a crucial role in providing powerful, accessible, and individualized interventions to young children with Autism Spectrum Disorders (ASD). Adaptive robotic technology appears especially promising in this regard, as it may generate intrinsic interest for some children and has the potential to flexibly adapt, scaffold, and reinforce micro-level skills in a manner that may not be possible within traditional intervention modalities. In this work we developed a novel adaptive, individualized, robot-mediated intervention technology for children with ASD focused on joint attention-related skills.

Objectives:

The objective of this work was to develop robot-mediated technology for children with ASD with the potential for assessment and intervention surrounding joint attention skills. Specifically, we attempted to endow this technology with the capacity to automatically detect and respond to shifts in looking patterns while simultaneously adjusting joint attention prompts and cues based on performance.

Methods:

We have implemented a multi-camera, distributed head-tracker system and integrated it with a small humanoid robot. The robot can initiate joint attention bids and dynamically generate individualized feedback based on the participant's viewing pattern, inferred from his/her head movement. The tracker uses the viewing pattern to trigger the robot or a therapist to deliver reinforcement. The procedure employs both the humanoid robot (non-social, NS) and a human therapist (social, S) in a single-subject, multiple-baseline design. A group of 6 children at risk for autism and a control group of 6 typically developing children, aged 2–5 years, are currently being recruited for this study. The human therapist and the humanoid robot will take turns initiating the joint attention tasks.
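As an illustration of the trigger logic described above, the following is a minimal Python sketch of how a gaze point derived from head pose might be checked against an on-screen target to trigger reinforcement. All names, the pinhole-style projection, and the acceptance-radius criterion are assumptions for illustration, not the authors' implementation:

```python
from dataclasses import dataclass
import math

@dataclass
class Target:
    """A screen target centered at (x, y) metres in the monitor plane (hypothetical)."""
    x: float
    y: float
    radius: float  # acceptance radius in metres

def gaze_point(yaw_deg: float, pitch_deg: float, distance: float = 1.5):
    """Project head yaw/pitch (degrees) onto a monitor `distance` metres away.

    Returns the (x, y) intersection of the head direction with the monitor plane.
    """
    x = distance * math.tan(math.radians(yaw_deg))
    y = distance * math.tan(math.radians(pitch_deg))
    return x, y

def is_on_target(yaw_deg: float, pitch_deg: float, target: Target) -> bool:
    """True when the inferred gaze point falls inside the target's acceptance region."""
    gx, gy = gaze_point(yaw_deg, pitch_deg)
    return math.hypot(gx - target.x, gy - target.y) <= target.radius

# A detected look toward the prompted target would trigger reinforcement.
target = Target(x=0.3, y=0.0, radius=0.1)
if is_on_target(11.3, 0.0, target):  # tan(11.3 deg) * 1.5 m is roughly 0.30 m
    print("trigger reinforcement")
```

In the actual system the yaw/pitch would come from the multi-camera head tracker at each frame; here they are passed in directly to keep the sketch self-contained.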

Results:

The head tracker's accuracy, consistency, and sensitivity under manual initialization were tested on a 24-inch LCD monitor located 1.5 m in front of and 0.2 m above the subject's head. The average error was 2.6 cm, at speeds of up to 20 frames per second, over 20 target points uniformly distributed across the screen. The functionality of the system was then tested with a typically developing 12-year-old child. In three tests, the child: 1) followed the prompt normally; 2) followed the prompt with frequent head movement; and 3) intentionally delayed following the prompt. For each trial the system measured performance metrics such as the latency of the head movement, fixation durations, and frequency of looking. Finally, the system's hierarchical prompting protocol was validated: if the child did not look, the robot added the child's name to the prompt; if the child still did not respond, the robot added a pointing gesture.
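The hierarchical prompting protocol can be sketched as follows. The prompt wording, the `child_looked` callback, and the escalation loop are illustrative assumptions, not the deployed code; in the real system the response check would be driven by the head tracker:

```python
# Escalating prompt levels, per the protocol: base prompt, then the child's
# name, then a pointing gesture. Wording is hypothetical.
PROMPT_LEVELS = [
    "verbal prompt ('Look!')",
    "verbal prompt with the child's name",
    "verbal prompt with the child's name plus a pointing gesture",
]

def run_prompt_hierarchy(child_looked):
    """Escalate prompts until the child looks or the levels are exhausted.

    `child_looked(level)` is a hypothetical callback that returns True when
    the head tracker reports a look at the target after the prompt at `level`.
    Returns the 0-based level that elicited the look, or None.
    """
    for level, prompt in enumerate(PROMPT_LEVELS):
        print("robot delivers:", prompt)  # placeholder for robot speech/gesture
        if child_looked(level):
            return level
    return None
```

For example, `run_prompt_hierarchy(lambda level: level == 2)` models a child who responds only once the pointing gesture is added, returning level 2.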

Conclusions:

An initial study with children with ASD will help assess the effectiveness of the system. We are endowing the system with dynamic reinforcement delivered through the targets in a context-aware manner. The system can also be extended to tasks beyond joint attention. This work can potentially lead to the development of robot-assisted systems for early diagnosis and treatment of children at high risk for ASD.