Language, Gesture, and Looking Patterns during Viewing of Social Interactions in Children with Autism Spectrum Disorder: Results from the ABC-CT Interim Analysis
Objectives: To investigate relationships between linguistic and gestural abilities and looking patterns toward videos of social interactions with and without spoken language.
Methods: Eye-tracking data were collected across five sites from 161 children with ASD between the ages of 6 and 11 years (mean age=8.71 years, mean IQ=95.80) and 64 age-matched typically developing (TD) controls (mean age=8.73 years, mean IQ=114.64). Eye-tracking data were collected with an SR Research EyeLink 1000 Plus while participants viewed videos in which two people engaged in a shared activity. In one paradigm, the actors spoke to each other; in a second paradigm, they did not speak. Receptive and expressive language function was assessed by parent report on the Vineland Adaptive Behavior Scales, 3rd Edition (Vineland-3). Gesture was assessed with the Autism Diagnostic Observation Schedule, 2nd Edition (ADOS-2) gesture scores. Repeated measures ANOVAs compared the log of the ratio of percent looking at the activity to percent looking at the face between the two eye-tracking paradigms. Relationships between this log-ratio and Vineland-3 Communication scores were analyzed using Pearson's correlations, and relationships with ADOS-2 gesture scores were analyzed using Spearman's correlations.
Results: Across both paradigms, children with ASD looked significantly less at faces relative to the activity than TD children (F(1,223)=7.625, p=0.006). While there was no significant difference in looking time to faces between the speech and non-speech videos in the TD group, children with ASD looked significantly less at faces relative to the activity during videos with speech (F(1,223)=32.931, p=0.001). In children with ASD, higher Vineland-3 expressive language scores were significantly correlated with greater looking time to faces during the videos with speech (r(161)=-0.203, p=0.010). In TD children, higher Vineland-3 receptive language scores were significantly correlated with more looking at faces relative to the activity during the speech videos (r(63)=-0.269, p=0.033). There were no significant correlations between Vineland-3 expressive or receptive scores and looking time to faces during the non-speech videos in either diagnostic group, and ADOS-2 gesture scores did not significantly correlate with looking time to faces during the speech or non-speech videos in either group.
Conclusions: In children with ASD, the presence of language in videos of social interactions was associated with decreased attention to faces; however, greater expressive language functioning in this group was related to increased attention to faces. These findings suggest that speech may modulate preferential looking to faces in ASD and that eye-tracking studies should carefully consider the content of their stimuli. Future studies should investigate how associated features of ASD, such as social anxiety, affect attention to faces while viewing verbal and non-verbal interactions.