"I See What You're Saying." An fMRI Study of Speech-Gesture Integration in Autism and Typical Development

Thursday, May 17, 2012
Sheraton Hall (Sheraton Centre Toronto)
9:00 AM
S. Lee1, M. Melnick2 and L. Bennetto2, (1)University of Rochester School of Medicine & Dentistry, Rochester, NY, (2)University of Rochester, Rochester, NY
Background: Evidence suggests that non-verbal cues facilitate language comprehension and specific neural networks underlie our ability to integrate cues from multiple sensory modalities. Specifically, regions within the posterior division of the superior temporal gyrus/sulcus (pSTS) have been implicated in the integration of visual and auditory linguistic cues. Converging lines of work have also demonstrated that individuals with autism have difficulty integrating verbal and nonverbal information. However, little is known about the neural network subserving audiovisual (AV) integration in individuals with autism.

Objectives: In the current study, we used fMRI to characterize the neural network subserving speech-gesture integration in children with high functioning autism and children with typical development. In particular, we designed experiments to assess multimodal (i.e., speech and gesture) integration in the context of language comprehension. Furthermore, we examined the relationship between multimodal language processing and social functioning abilities.

Methods: Seventeen boys with autism spectrum disorder (ASD; ages 8-15) and 20 typically developing (TD) boys (ages 7-15) participated in this study. All participants were right-handed, native English speakers with normal hearing and visual acuity. In each of three conditions, participants were presented with a description of a shape followed by two pictures, and were instructed to select the target shape using a button-box response system. The three conditions varied in whether the shape was described using speech (audio-only), gesture (video-only), or simultaneous speech and gesture (AV). EPI gradient-echo sequences were acquired over 30 axial slices (TR=3.0s, TE=30ms, FOV=256mm, 4mm slices), and an MP-RAGE high-resolution sagittal T1 structural image (TR=2.53s, TE=3.44ms, FOV=256mm, flip angle=7°) was acquired for registration. Pre-processing and analyses of data were conducted using FSL.

Results: Analysis of behavioral data demonstrated no significant differences in accuracy either across conditions or between groups. Consistent with previously reported findings in typical adults, both the TD and ASD groups demonstrated widespread activation of a network including auditory and visual cortices, frontoparietal regions, and, in particular, pSTS in response to AV stimulation. For the TD group, direct comparison of the AV condition versus the two unimodal conditions yielded signal enhancement in pSTS and occipital cortex. The same contrast in the ASD group, however, showed enhancement exclusively in occipital cortex. Moreover, the severity of social deficit in the ASD group was inversely associated with pSTS activity, such that increased social impairment was associated with diminished BOLD signal in pSTS.

Conclusions: These results are consistent with prior evidence of pSTS involvement in speech-gesture integration in healthy adults, and demonstrate that pSTS operates as part of a dynamic network for AV integration by mid-childhood. Of particular interest, we demonstrated preservation of some functional response in the pSTS of children with autism, despite significant differences in the pattern of network enhancement during AV integration. Overall, this study sheds light on the neural network subserving social communication, and demonstrates changes in network function associated with ASD. Differences in the development of a network for AV integration may play an important role in the development of the social communication deficits characteristic of autism.
