Audiovisual Speech Perception in Children with Autism Spectrum Disorders: An Eye Tracking Study

Poster Presentation
Friday, May 11, 2018: 5:30 PM-7:00 PM
Hall Grote Zaal (de Doelen ICC Rotterdam)
S. Feng1, Z. Ma2, Y. Wu1, T. Li3, L. Chen1 and L. Yi3, (1)School of Psychological and Cognitive Sciences, Peking University, Beijing, China, (2)Emory University, Beijing, China, (3)School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
Background: Children with autism spectrum disorder (ASD) show weaker audiovisual speech integration than typically developing (TD) children do, for example a weaker McGurk effect (Bebko, Schroeder, & Weiss, 2014). In the McGurk effect, a heard syllable paired with an incongruent visual articulation (e.g., auditory /ba/ with visual /ga/) is often perceived as a fused syllable such as /da/, so its strength indexes how much visual speech information shapes auditory perception. The reduced McGurk effect in ASD could be related to atypical face processing (Rice, Moriuchi, Jones, & Klin, 2012) or to deficits in multisensory temporal processing (Stevenson et al., 2014).

Objectives: We examined whether a weaker McGurk effect in children with ASD could be explained by reduced fixation time on the speaker's face or mouth. We also investigated whether children with ASD showed a deficit in audiovisual temporal processing of the mismatched McGurk stimuli.

Methods: Forty-nine children with ASD and thirty-one TD children participated in two experiments. First, we investigated whether the ASD group showed a McGurk effect comparable to that of the TD group. We designed two conditions: one with the speaker's eyes open and one with the speaker's eyes closed. In both conditions, we presented audiovisually matched stimuli and audiovisually mismatched (McGurk) stimuli. Participants reported what the speaker said while we recorded their eye movements. Second, we measured the width of each group's audiovisual temporal binding window using the McGurk stimuli: participants completed an audiovisual simultaneity judgment task with stimulus onset asynchronies (SOAs) of 0, ±40, ±120, ±240, ±360, ±480, and ±680 ms (negative values indicate that the auditory stimulus was presented first).

Results: In Experiment 1, we found a Group × Condition interaction for the McGurk effect: the ASD group showed a significantly weaker McGurk effect than the TD group in the eyes-open condition only; the two groups did not differ in the eyes-closed condition. The TD group showed a weaker McGurk effect in the eyes-closed than in the eyes-open condition, whereas the ASD group did not differ between conditions (Figure 1). An area-of-interest analysis of eye movements on McGurk trials revealed that the ASD group looked at the mouth and the face for less time than the TD group for the audiovisually mismatched stimuli in both conditions (Figure 2). Trial-by-trial generalized linear mixed models further showed that longer mouth-looking or face-looking time in the ASD group predicted a stronger McGurk effect. In Experiment 2, each participant's percentages of "simultaneous" responses across SOAs were fit with psychometric functions. The audiovisual temporal binding windows of the ASD group were significantly wider than those of the TD group. However, we found no correlation between the width of the audiovisual temporal binding window and the strength of the McGurk effect.
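The abstract does not specify the psychometric model used to estimate the temporal binding window; a common approach in this literature is to fit a scaled Gaussian to the proportion of "simultaneous" responses as a function of SOA and define the window from the fitted spread. The sketch below illustrates that approach with the SOAs from the task and entirely hypothetical response data for one participant (the `p_sim` values are invented for illustration, not taken from the study).

```python
import numpy as np
from scipy.optimize import curve_fit

# SOAs (ms) from the task: negative = auditory stimulus presented first
soas = np.array([-680, -480, -360, -240, -120, -40, 0,
                 40, 120, 240, 360, 480, 680])

# Hypothetical proportions of "simultaneous" responses for one participant
p_sim = np.array([0.05, 0.10, 0.25, 0.55, 0.85, 0.95, 0.97,
                  0.96, 0.90, 0.70, 0.40, 0.20, 0.08])

def gaussian(soa, amp, mu, sigma):
    """Scaled Gaussian commonly fit to simultaneity-judgment data."""
    return amp * np.exp(-(soa - mu) ** 2 / (2 * sigma ** 2))

# Fit amplitude, point of subjective simultaneity (mu), and spread (sigma)
popt, _ = curve_fit(gaussian, soas, p_sim, p0=[1.0, 0.0, 200.0])
amp, mu, sigma = popt

# One common convention: temporal binding window (TBW) = full width
# of the fitted curve at half its peak amplitude
tbw = 2 * np.sqrt(2 * np.log(2)) * abs(sigma)
print(f"PSS: {mu:.1f} ms, TBW: {tbw:.1f} ms")
```

Under this convention, a wider fitted spread (larger TBW) corresponds to judging a broader range of audiovisual asynchronies as simultaneous, which is the group difference reported above for the ASD group.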

Conclusions: The present study provides direct evidence for a relation between audiovisual speech perception and atypical face processing in children with ASD. These findings further our understanding of the mechanisms underlying atypical audiovisual speech perception in ASD.