Cortical Processing of Audiovisual Speech Integration in Children with Autism

Poster Presentation
Thursday, May 2, 2019: 11:30 AM-1:30 PM
Room: 710 (Palais des congrès de Montréal)
Y. Zhang1,2, T. K. Koerner3, C. Kao1, L. Yu1,4 and J. T. Elison5, (1)Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN, (2)University of Minnesota Center for Neurobehavioral Development, Minneapolis, MN, (3)National Center for Rehabilitative Auditory Research, Portland VA Medical Center, Portland, OR, (4)Psychology, South China Normal University, Guangzhou, China, (5)University of Minnesota, Minneapolis, MN
Background: The ability to detect auditory-visual correspondence in speech is an early hallmark of typical language development. Infants can detect mismatches between the auditory and visual components of spoken vowels such as /a/ and /i/ as early as 4 months of age. Event-related potential (ERP) work in our lab with typically developing infants showed a clear N400 response at centro-frontal electrodes to incongruent pairings of audiovisual vowel stimuli. In previous autism research, a deficit in audiovisual speech integration has been reported with the well-known McGurk effect, which produces illusory percepts such as a fused /da/ from combining an auditory /ba/ sound with a visual /ga/ articulation. It remains unclear, however, whether children with autism would show a similar deficit in audiovisual integration under a much simpler audiovisual congruency detection protocol that uses vowel stimuli without involving McGurk-type fusion.

Objectives: The purpose of the present ERP study was to examine cortical processing of audiovisual speech integration in children with autism. We were particularly interested in identifying potential neural markers of an audiovisual integration deficit using a simpler congruency detection protocol.

Methods: Video clips of two naturally spoken vowels, /a/ and /i/, were digitally edited to create congruent and incongruent pairings of the auditory and visual information. EEG data were recorded from twelve children diagnosed with autism (5–8 years old) and from age-matched controls. Randomized blocks of congruent and incongruent trials were presented at approximately 70 dB SPL. The EEG was collected with a 64-channel ANT (Advanced Neuro Technology, the Netherlands) amplifier and a shielded WAVEGUARD EEG cap. Offline, the EEG data were band-pass filtered (0.5–40 Hz) and further processed with independent component analysis for artifact attenuation. Trials exceeding ±50 μV were then rejected, and the accepted trials were averaged for ERP waveform analysis. Minimum norm estimation was further conducted for source localization.
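
As a rough illustration of the preprocessing and analysis pipeline described above, the sketch below uses the open-source MNE-Python library; the file names, ICA component selections, epoch window, condition labels, and forward-model/noise-covariance inputs are hypothetical placeholders, not the authors' actual analysis scripts.

```python
# Illustrative sketch of the EEG pipeline in the Methods (assumptions noted inline).
import mne
from mne.preprocessing import ICA
from mne.minimum_norm import make_inverse_operator, apply_inverse

# Load continuous 64-channel EEG (hypothetical file exported to a format MNE can read).
raw = mne.io.read_raw_brainvision("subject01.vhdr", preload=True)

# Band-pass filter 0.5-40 Hz, as reported in the abstract.
raw.filter(l_freq=0.5, h_freq=40.0)

# Independent component analysis for artifact attenuation
# (component count and excluded components are assumed for illustration).
ica = ICA(n_components=20, random_state=0)
ica.fit(raw)
ica.exclude = [0, 1]  # e.g., blink/eye-movement components (assumed)
ica.apply(raw)

# Epoch around stimulus onset; reject trials exceeding +/-50 microvolts.
events, event_id = mne.events_from_annotations(raw)  # event codes are data-dependent
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=0.8,              # epoch window assumed
                    baseline=(None, 0),
                    reject=dict(eeg=50e-6), preload=True)

# Average accepted trials per condition for ERP waveform analysis
# (condition labels "congruent"/"incongruent" are assumed).
evoked_congruent = epochs["congruent"].average()
evoked_incongruent = epochs["incongruent"].average()

# Minimum norm estimation for source localization (requires a forward solution
# and a noise covariance; placeholder inputs here).
fwd = mne.read_forward_solution("subject01-fwd.fif")
cov = mne.compute_covariance(epochs, tmax=0.0)
inv = make_inverse_operator(evoked_incongruent.info, fwd, cov)
stc = apply_inverse(evoked_incongruent, inv, method="MNE")
```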

Results: Unlike the control group, which showed a clear congruency effect in the ERP data, a subgroup of four children with autism did not show this effect. The peak latencies of the ERP responses for incongruency detection in the 5–8-year-old children were more adult-like, whereas the polarity and scalp distribution were more infant-like. Minimum norm estimation showed significant activity in the right superior temporal, right inferior frontal, and parietal regions for incongruency detection, which varied as a function of time window.

Conclusions: The ERP data indicate a continuum of ability to integrate audiovisual speech information at the cortical level among individuals with autism spectrum disorder. Further work with a larger sample is needed to better understand individual differences in audiovisual speech processing in relation to language development.