Attention to Speaker's Mouth Region in Toddlers Predicts Language and Autism Severity Levels in Preschool-Aged Children

Poster Presentation
Thursday, May 2, 2019: 11:30 AM-1:30 PM
Room: 710 (Palais des congres de Montreal)
K. Villarreal1, C. D. Gershman1, N. Powell1, H. Neiderman1, K. Joseph1, E. Yhang1, F. Shic2,3, Q. Wang1, S. Macari1, M. Lyons1, K. Chawarska1 and C. Nutor1, (1)Child Study Center, Yale University School of Medicine, New Haven, CT, (2)Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA, (3)Pediatrics, University of Washington School of Medicine, Seattle, WA
Background: Although it is well documented that toddlers with ASD look less at faces (Guillon, 2014), atypicalities in attention to the eye and mouth regions are context-dependent (Chawarska, 2012). While limited attention to the mouth in the second year of life may be detrimental to language acquisition (Tenenbaum, 2015), poor attention to the eye region may negatively impact social communication skills (Elsabbagh, 2013).

Objectives: To examine associations between the proportion of time spent on the eye (%Eyes) and mouth (%Mouth) regions of interest (ROIs) of videotaped interactive partners in the 2nd year (Time-1) and verbal skills and autism severity 1-2 years later (Time-2) in toddlers with ASD and typically developing (TD) controls. Based on prior work (Tenenbaum, 2015), we hypothesized that toddlers with higher Time-1 %Mouth would have higher verbal ability at Time-2 and that toddlers with lower Time-1 %Eyes would show greater Time-2 autism severity.

Methods: Participants included 39 toddlers with ASD (mean age=23.2 months, SD=3.2) and 36 with TD (mean age=21.5 months, SD=3.1). At Time-1, they were administered the Selective Social Attention 2.0 eye-tracking task (Figure 1), which included four conditions: (1) Direct Gaze Only (DG+SP-): actress looking at the camera, silent; (2) Speech Only (DG-SP+): actress looking down, speaking; (3) Dyadic Bid (DG+SP+): actress looking at the camera, speaking; and (4) No Bid (DG-SP-): actress looking down, not speaking. For each condition, %Eyes and %Mouth were computed as looking time to each ROI standardized by total looking time. Diagnostic assessment took place at 40.2 months (SD=3.56) (Time-2). Verbal skills were evaluated with the Verbal Developmental Quotient (VDQ) of the Mullen Scales of Early Learning, and the Autism Diagnostic Observation Schedule-2 (ADOS-2) was used to quantify autism severity. Pearson's r correlation analyses evaluated relationships between variables of interest at the two time points.

Results: In the conditions without speech (DG-SP-, DG+SP-), neither %Mouth nor %Eyes was associated with later outcomes (Figure 2). When speech was present (DG-SP+, DG+SP+), significant correlations were observed between Time-1 %Mouth and Time-2 VDQ (DG-SP+: r(72)=.35, p=.002; DG+SP+: r(68)=.38, p=.001) (Figure 2a). Higher Time-1 %Mouth was also associated with lower autism severity at Time-2 (DG-SP+: r(50)=-.34, p=.013; DG+SP+: r(47)=-.40, p=.004) (Figure 2b). However, Time-1 %Eyes was not associated with Time-2 outcomes in either condition (Figure 2b).

Conclusions: How toddlers monitor a speaker's face predicts social and verbal functioning at preschool age. Greater attention to the mouth was associated with better language outcomes and lower autism severity. These results are consistent with work in TD infants showing positive links between early mouth-looking and later language skills (Tenenbaum, 2015). Both toddlers and older children with ASD show limited attention to the speaker's mouth region (Shic, under review), which in older children is linked with poor audiovisual speech integration and speech perception deficits (Irwin, 2017). These findings suggest that diminished attention to salient audiovisual cues early in development has a negative impact on audiovisual perception. Thus, in the second year of life, when an interactive partner is speaking, looking at the eyes does not appear to confer the adaptive advantage it may provide when monitoring gaze and facial communicative gestures in the absence of speech (e.g., during joint attention).