The Sense of Leading Gaze in Joint Attention

Thursday, May 11, 2017: 12:00 PM-1:40 PM
Golden Gate Ballroom (Marriott Marquis Hotel)
O. Grynszpan1, J. Nadel2 and J. C. Martin3, (1)CNRS UMR 7222, Institute of Intelligent Systems and Robotics (ISIR), Paris, France, (2)French National Centre for Scientific Research (CNRS), Paris, France, (3)LIMSI, Université Paris Sud, Orsay, France
Background:  Gaze plays a pivotal role in human communication, especially for coordinating attention. Autism Spectrum Disorder (ASD) is considered to be strongly associated with impairments in joint attention. The ability to lead the gaze direction of others forms the backbone of joint attention. For this ability to be functional in interpersonal communication, the sensorimotor feedback yielded by the gaze reactions of others must elicit a sense of leading gaze in the individual who initiates the eye movements.

Objectives:  This study investigates a specific aspect of joint attention in ASD, that is, the emergence of the sense that one is leading the attentional focus of others.

Methods:  Using eye-tracking and virtual reality technology, we designed avatars that could follow the gaze of participants in real time. Seventeen adults with ASD and 17 typical adults matched on IQ participated. During a training phase, participants were alternately exposed to an avatar that followed their gaze and an avatar that moved its gaze independently of them. The avatars were surrounded by three objects that changed with each new trial (Figure). After each trial, participants had to indicate which object they preferred and guess which object the avatar had preferred. In a subsequent test phase, they faced the two avatars simultaneously, with three objects displayed around them. Again, one avatar followed their gaze while the other did not. The task was the same as before. Eye-tracking data served as measures of attention. Participants’ responses regarding their own preferred objects and the preferred objects of the avatars yielded a measure of their awareness that a link existed between them and one of the avatars, namely the avatar following their gaze.

Results:  During the final half of the training phase, typical participants selected the same preferred object for themselves and the gaze-following avatar more often than for the independently gazing avatar [Z = 2.08, p = 0.038]. By contrast, no such effect was observed in the group with ASD. During the test phase, eye-tracking data yielded an interaction between group and the gaze behaviors of the avatars [F (1, 32) = 5.25, p = 0.029, η² = 0.14]. Typical participants looked more at the independently gazing avatar than did participants with ASD, who focused more on the gaze-following avatar.

Conclusions:  Attentional measures suggest that participants with ASD remained intrigued by the gaze-following avatar during the test phase, whereas typical participants appeared to have already recognized the link between themselves and this avatar during the training phase. Impairments in joint attention could be linked to a failure to sense oneself as an agent leading the attentional focus of others.

Figure: Snapshot of the virtual scene shown to each participant during the training phase. An avatar was displayed with three consumer goods located to its sides and in front of it. The avatar either followed the gaze of the participant or displayed gaze patterns that were independent of the participant.