“Look Who's Talking!” Gaze Patterns in Implicit and Explicit Onset Asynchrony Detection

Friday, May 18, 2012
Sheraton Hall (Sheraton Centre Toronto)
10:00 AM
R. B. Grossman1,2, A. Schmid2, E. Steinhart2 and T. Mitchell2, (1)Emerson College, Boston, MA, (2)Psychiatry, UMMS Shriver Center, Waltham, MA
Background:  Evidence is conflicting on whether individuals with ASD can integrate auditory-visual (AV) information for language (van de Smagt et al., 2007; Smith & Bennetto, 2007), but our prior work showed that individuals with ASD can use lipreading to detect AV onset asynchrony (Grossman et al., 2009).

Objectives:  We assessed whether implicit vs. explicit task designs affect looking patterns for in-synch and out-of-synch videos in adolescents with ASD and typically developing (TD) controls. We hypothesized that individuals with ASD would show less directed looking patterns than TD participants in an implicit task, but would improve their gaze behavior when given explicit task instructions.

Methods:  We recorded a close-up video of a woman talking about baking dessert and presented the same video side-by-side, with one copy delayed by 10 frames. The audio track alternated between the two sides, switching which video it was in synch with every 8-18 seconds. Participants were adolescents aged 8-18 (ASD n=21, TD n=26). In the implicit task, participants were told only to attend closely to the video. After completing the implicit task and a distraction task, they viewed the same video again with explicit instructions to look at the woman speaking in synch with the audio track. We collected eye-tracking data on participants' fixation patterns during both tasks.

Results:  We calculated the percentage of looking time to regions of interest (ROIs) on the correct and incorrect sides (upper face, lower face, eye region, mouth region, and non-face). We conducted a 2 (group) x 2 (task: explicit vs. implicit) x 2 (side: correct vs. incorrect) repeated-measures ANOVA on visual fixations. Results showed main effects of side and group, with the TD group looking at the correct side significantly more than their ASD peers in both tasks, as well as a task by side interaction and a task by group by side interaction, indicating that the nature of the task revealed significant differences in each group's looking patterns. TD participants looked at the mouth on the correct side significantly more than their ASD peers, while ASD participants looked at non-face regions significantly more than their TD peers. No group differences were found for the eye region. TD participants modulated their response based on task instructions, looking significantly more at the incorrect mouth during the implicit task and significantly more at the correct mouth during the explicit task. Participants with ASD showed no task-related differences in looking patterns to the mouth.
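The dependent measure above (percent looking time per ROI) is a simple normalization of fixation durations. As a minimal sketch, assuming a hypothetical fixation log with illustrative column and ROI names (not the study's actual pipeline), the computation might look like:

```python
import pandas as pd

# Hypothetical fixation log: one row per fixation, with its duration (ms)
# and the region of interest (ROI) it landed in. Column names and ROI
# labels are illustrative assumptions, not from the study's data.
fixations = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2],
    "roi": ["mouth_correct", "eyes_correct", "nonface",
            "mouth_incorrect", "mouth_correct", "nonface"],
    "duration_ms": [400, 250, 150, 200, 500, 300],
})

# Percent looking time per ROI = ROI fixation time / total fixation time.
totals = fixations.groupby("participant")["duration_ms"].transform("sum")
fixations["pct"] = 100 * fixations["duration_ms"] / totals

# One row per participant, one column per ROI, summing to 100 per row.
pct_by_roi = (fixations.groupby(["participant", "roi"])["pct"]
              .sum().unstack(fill_value=0.0))
print(pct_by_roi.round(1))
```

These per-participant ROI percentages would then serve as the dependent variable in the 2 x 2 x 2 ANOVA described above.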

Conclusions:  Adolescents with ASD, who can accurately detect an onset asynchrony of 10 frames (Grossman et al., 2009), do not modulate their looking patterns in response to explicit task instructions, whereas TD adolescents gaze significantly more at the mouth region of the in-synch face during the explicit than the implicit task. Despite their reported preference for looking at the mouth region of a face, adolescents with ASD looked at the mouth significantly less than their TD peers and instead focused their gaze on non-face regions of the screen.
