Assessing Audio-Visual Integration in Speech in Minimally Verbal Young Children with Autism Spectrum Disorder

Poster Presentation
Saturday, May 4, 2019: 11:30 AM-1:30 PM
Room: 710 (Palais des congrès de Montréal)
M. Kissine1, J. Bertels2, N. Deconinck3, G. Passeri4 and G. Deliens1, (1)ACTE — Center of research in Linguistics — ULB Neuroscience Institute, Université libre de Bruxelles, Brussels, Belgium, (2)Université libre de Bruxelles, Brussels, Belgium, (3)Paediatric neurology, Hôpital Universitaire des Enfants Reine Fabiola, Brussels, Belgium, (4)Child Psychiatry Service, Hôpital Universitaire des Enfants Reine Fabiola, Brussels, Belgium
Background: Poor integration of speech sounds with mouth movements likely contributes to language acquisition deficits in young children with autism spectrum disorder (ASD). However, existing evidence of a multimodal integration deficit in autism is either limited to older, high-functioning, verbal individuals or relies on preferential gaze paradigms that were developed for infant research but, as we show, are not optimal for investigating preschoolers.

Objectives: We designed a Reinforced Preferential Gaze paradigm that overcomes biases in previous research and allows us to test multimodal integration in young, non-verbal children with ASD.

Methods: Participants: 31 non- or minimally verbal children with ASD (35–72 months) and 44 typically developing (TD) children (36–72 months).

Video stimuli are limited to the mouth region and consist of a 5 s recording of three identical consonant-vowel syllables, so that three clear articulatory movements can be easily mapped onto three salient acoustic events associated with the consonant. In each trial, the stimulus presentation phase is followed by a 1 s blank-screen transition, after which a visually attractive 3 s reward animation starts. The position of the reward can be anticipated only on the basis of the temporal alignment between the video and audio components of the stimuli: for half of the children in each group (TD or ASD), the reward consistently appeared on the side of the in-sync video (Synchronous version) and, for the other half, on the side of the out-of-sync video (Asynchronous version). Consequently, anticipatory gaze towards the location of the reward during the transition phase indicates the capacity to temporally bind the acoustic and visual signals. Each participant was exposed to 30 trials on a screen equipped with a Tobii X2-60 eye-tracker. The trial logic is sketched below.
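The following sketch illustrates the contingency described above; durations, the two version labels, and the 30-trial count come from the abstract, while all function names and the side-assignment logic are hypothetical.

```python
# Illustrative sketch of the Reinforced Preferential Gaze trial structure.
# Durations and version labels follow the abstract; everything else is assumed.
import random

STIMULUS_S, TRANSITION_S, REWARD_S = 5.0, 1.0, 3.0
N_TRIALS = 30

def reward_side(sync_side, version):
    """Return the side where the reward animation appears.

    sync_side -- 'left' or 'right': side showing the audio-aligned video.
    version   -- 'synchronous': reward follows the in-sync video;
                 'asynchronous': reward follows the out-of-sync video.
    """
    other = 'right' if sync_side == 'left' else 'left'
    return sync_side if version == 'synchronous' else other

# Each child is assigned one version; the in-sync side varies across trials.
version = 'synchronous'  # or 'asynchronous' for the other half of each group
trial_rewards = [reward_side(random.choice(['left', 'right']), version)
                 for _ in range(N_TRIALS)]
```

Because the reward side is yoked to audio-visual alignment rather than to a fixed screen position, only a child who binds the two signals can anticipate where the animation will appear.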

Two areas of interest (AOIs) were defined and kept constant across the stimulus and transition phases: Reward, corresponding to the exact zone where the rewarded stimulus was displayed, and Non-reward, corresponding to the exact zone where the non-rewarded stimulus was displayed. Together, these two AOIs covered 8.86% of the total screen area. A third AOI, Other, corresponded to the rest of the screen and was used for the analysis of the transition phase. Every 16 ms, and for each AOI, we extracted eye-tracking data indicating whether that AOI was fixated or not.
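A minimal sketch of how each 16 ms gaze sample (the 60 Hz rate of the Tobii X2-60) could be coded against these three AOIs; the rectangle coordinates and function names are hypothetical.

```python
# Hypothetical AOI coding for one gaze sample every ~16 ms (60 Hz).
from dataclasses import dataclass

@dataclass
class AOI:
    name: str
    x0: float  # left edge, screen pixels
    y0: float  # top edge
    x1: float  # right edge
    y1: float  # bottom edge

    def contains(self, x, y):
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def code_sample(x, y, reward, non_reward):
    """Label one gaze sample as Reward, Non-reward, or Other."""
    if reward.contains(x, y):
        return 'Reward'
    if non_reward.contains(x, y):
        return 'Non-reward'
    return 'Other'  # everywhere else on the screen
```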

Results: Stepwise multilevel regressions, with by-item and by-participant random slopes, were used to analyse fixation curves in the stimulus and transition phases. During stimulus presentation, children's gaze was mostly driven by the periodic and salient mouth movements, independently of group or temporal alignment. Both groups demonstrated a clear preference for the Reward AOI during the first half of the transition period in both versions, i.e. independently of whether the reward phase was primed by the aligned or the misaligned video. However, children with ASD exhibited a lower rate of fixations on the Reward AOI, displaying lower sensitivity to audio-visual alignment.
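A minimal sketch of one step of such an analysis, assuming a long-format table with one row per 16 ms sample; the column names and file are hypothetical, and this fits a linear mixed model with a by-participant random slope only (the abstract's full stepwise procedure with by-item slopes is not reproduced here).

```python
# Sketch: linear mixed model of Reward-AOI fixation over transition time.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: participant, item, group, time, on_reward (0/1)
data = pd.read_csv("transition_samples.csv")

model = smf.mixedlm(
    "on_reward ~ group * time",   # fixed effects: group, time, interaction
    data,
    groups=data["participant"],   # by-participant random intercepts
    re_formula="~time",           # by-participant random slope of time
)
fit = model.fit()
print(fit.summary())
```

Crossed by-item random slopes, as reported in the abstract, would typically require a tool such as lme4 in R; the point here is only the shape of the growth-curve model, with fixation probability predicted by group, time, and their interaction.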

Conclusions: Relative to traditional preferential gaze paradigms, our method offers a clearer window on the audio-visual integration difficulties of young children with ASD.