26776
Social Visual Attention Training Using Virtual Humans and Eye-Tracking

Poster Presentation
Friday, May 11, 2018: 10:00 AM-1:30 PM
Hall Grote Zaal (de Doelen ICC Rotterdam)
O. Grynszpan1, J. Nadel2 and J. C. Martin3, (1)LIMSI CNRS UPR3251, Université Paris-Sud, Orsay, France, (2)French National Centre for Scientific Research (CNRS), Paris, France, (3)LIMSI, Orsay University, Orsay, France
Background: Visual attention is essential to grasp the transient emotional information expressed by faces during human social interactions. Over the last two decades, a sizeable body of eye-tracking literature has reported diminished visual attention to faces in individuals with Autism Spectrum Disorder (ASD) when they attend to social scenes. Addressing such deficits with educational approaches raises complex issues, as there are no systematic rules for when faces should be looked at during a social interaction.

Objectives: Our goal is to test a novel method for training visual attention to relevant emotional signals expressed by faces, based on the use of virtual humans and eye-tracking technology.

Methods: Twenty-one adolescents with ASD (3 girls and 18 boys) participated in the study. They were randomized to an experimental group and a control group that were matched for verbal and non-verbal abilities. Participants in the control group were assigned to a computerized educational program in geometry. The experimental group was trained with a system that enabled users to control a graphic interface with their eyes via an eye-tracker. Participants were placed in front of a screen displaying a virtual human that addressed them. The graphic display was entirely blurred except for a rectangular viewing window that followed the gaze of the participant. One of the utterances of the virtual character could be interpreted in two distinct ways according to the context. The context was provided by the character's facial expressions, which enabled disambiguating this key sentence and therefore understanding the whole message. Participants then had to answer closed-choice questions that assessed their understanding of the virtual human's message. To answer correctly, participants had to look at the relevant emotional features of the face at the right time while attending to what the virtual human was saying. A minimal rendering sketch of such a gaze-contingent viewing window is given below.
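The sketch below is an illustrative reconstruction, not the authors' implementation: it blurs a full frame and restores a sharp rectangular region centred on the current gaze estimate. The window size, blur strength, and the stand-in stimulus are assumptions; in the actual system the gaze coordinates would be streamed from the eye-tracker SDK and the frames would show the animated virtual human.

```python
# Illustrative sketch of a gaze-contingent "viewing window" (assumed parameters,
# not the authors' code). The whole frame is blurred except for a sharp
# rectangle centred on the gaze point supplied by an eye-tracker.
import cv2
import numpy as np

WINDOW_W, WINDOW_H = 200, 150  # size of the un-blurred viewing window (assumed)


def render_gaze_contingent(frame: np.ndarray, gaze_x: int, gaze_y: int) -> np.ndarray:
    """Return a copy of `frame` blurred everywhere except a rectangle at the gaze point."""
    h, w = frame.shape[:2]
    blurred = cv2.GaussianBlur(frame, (51, 51), 0)  # heavy blur over the full display

    # Clamp the viewing window so it stays inside the frame.
    x0 = max(0, min(w - WINDOW_W, gaze_x - WINDOW_W // 2))
    y0 = max(0, min(h - WINDOW_H, gaze_y - WINDOW_H // 2))

    # Paste the sharp region from the original frame into the blurred copy.
    blurred[y0:y0 + WINDOW_H, x0:x0 + WINDOW_W] = frame[y0:y0 + WINDOW_H, x0:x0 + WINDOW_W]
    return blurred


if __name__ == "__main__":
    # Stand-in stimulus and a single gaze sample; a real system would loop over
    # frames of the virtual human and live gaze coordinates from the eye-tracker.
    stimulus = np.full((600, 800, 3), 200, dtype=np.uint8)
    cv2.circle(stimulus, (400, 300), 80, (0, 0, 255), -1)
    out = render_gaze_contingent(stimulus, gaze_x=400, gaze_y=300)
    cv2.imwrite("gaze_window_demo.png", out)
```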

Results: Social and communicative abilities were assessed before training, after training, and after a two-month follow-up period. The evaluation was based on a battery of social tests that were not used for training. There was a significant improvement on the test that was most proximal to the training task. It involved understanding a written dialog between characters whose faces could be displayed on demand. Participants in the experimental group scored higher than control participants on this test after training [F(2,38) = 4.76, p = 0.035].

Conclusions: The results of this pilot randomized controlled study support the efficacy of the novel social training method that we designed. The presentation will include a live demo of the system: the audience will be able to use the software, see the virtual humans, and control the viewing window with their eyes via the eye-tracker.

Acknowledgments: This work was supported by a grant from the Orange Foundation (project #71/2012).