26804
Do Infants at Risk for ASD Attend More to the Mouth When Watching Dynamic Videos?

Poster Presentation
Friday, May 11, 2018: 11:30 AM-1:30 PM
Hall Grote Zaal (de Doelen ICC Rotterdam)
K. H. Finch1, H. Tager-Flusberg2 and C. A. Nelson3, (1)Boston University, Boston, MA, (2)Psychological and Brain Sciences, Boston University, Boston, MA, (3)Boston Children's Hospital, Boston, MA
Background:

Typically developing 4- to 12-month-olds attend more to the mouth when watching videos of a woman speaking directly to them, a pattern thought to aid speech processing (Lewkowicz & Hansen-Tift, 2011). This process might be impaired in infants who develop ASD, as they show different gaze patterns to speaking faces (Shic et al., 2014). Moreover, differences in gaze patterns to speaking faces have been found in 9-month-olds who are at familial risk for ASD but do not go on to develop ASD (Guiraud et al., 2012). However, it is unclear whether this difference extends to all infants at risk for ASD.

Objectives:

The current study investigated differences in gaze patterns in infants at risk for developing ASD. We included infants at familial risk and infants who failed a 12-month screener, as well as an age-matched group of low-risk infants.

Methods:

Participants

Sixty-seven English-speaking 12- to 14-month-olds were divided into two groups: 1) low-risk typically developing controls (LR; N=47) or 2) high-risk for ASD (HR; N=20), defined as either having an older sibling with ASD (HRA; N=10) or failing the Communication and Symbolic Behavior Scales Checklist (Wetherby & Prizant, 2002) at 12 months (HRS; N=10).

Procedure

Infants watched dynamic videos of a woman speaking directly to them on a Tobii T60 eye-tracker. The current study utilized a McGurk paradigm, presenting four video types up to 30 times each: 1) congruent-ba (Visual /ba/, Audio /ba/, or VbaAba), 2) congruent-ga (VgaAga), 3) incongruent-impossible (VbaAga), and 4) incongruent-McGurk (VgaAba).

Analysis

There were two areas of interest (AOIs): eyes and mouth. We calculated looking-time proportions by dividing the total time spent looking at each AOI by the total time spent looking at any portion of the face. We performed ANOVAs with condition (VbaAba, VgaAga, VbaAga, VgaAba) and AOI (eyes, mouth) as within-subjects factors and group (LR, HR) as a between-subjects factor.
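The proportion measure can be sketched in a few lines of Python. This is a minimal illustration, not the authors' pipeline: the function name and dwell-time values are hypothetical, and the real inputs would come from eye-tracker fixation reports.

```python
# Sketch of the AOI looking-time proportion described above.
# Dwell times (ms) per AOI are hypothetical example values.

def aoi_proportions(dwell_ms):
    """Divide time looking at each AOI by total time looking at the face."""
    face_total = sum(dwell_ms.values())
    if face_total == 0:
        # No looking time at the face on this trial: proportions undefined,
        # returned here as zeros for simplicity.
        return {aoi: 0.0 for aoi in dwell_ms}
    return {aoi: t / face_total for aoi, t in dwell_ms.items()}

# Example trial: eyes, mouth, and remaining face regions
trial = {"eyes": 1200.0, "mouth": 2100.0, "other_face": 700.0}
props = aoi_proportions(trial)
print(round(props["mouth"], 3))  # mouth proportion = 2100 / 4000 = 0.525
```

In practice these per-trial proportions would be averaged within each condition for each infant before entering the ANOVA.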

Results:

There was a main effect of condition (F(3,195)=25.40, p<.001), with infants looking more towards the face in the congruent-ga and incongruent-McGurk conditions. There was a main effect of AOI (F(1,65)=6.46, p=.013), with infants looking more towards the mouth than the eyes. There was also a condition-by-AOI interaction (F(3,195)=17.38, p<.001), driven by infants looking more towards the mouth in the congruent-ga and incongruent-McGurk conditions. There were no significant group differences. Moreover, a preliminary analysis comparing the LR, HRA, and HRS groups yielded similar results, including no group differences (see Figure 1).

Conclusions:

We found that LR and HR infants showed similar gaze patterns when watching videos of a woman speaking directly to them. All infants looked more towards the mouth, especially when the visual articulatory cues were less clear (/ga/ versus /ba/). While this work focuses on group-level analyses, the data do show individual variability, with some infants looking more towards the eyes while others attend more to the mouth. With larger sample sizes, future work should continue to investigate individual variability as well as group differences, including distinguishing infants who later develop ASD from those who are simply at risk for ASD.