Neural Correlates of Environmental Sound and Emotional Semantic Integration in Children with Autism

Thursday, May 17, 2012
Sheraton Hall (Sheraton Centre Toronto)
10:00 AM
J. McCleery1, V. Vogel-Farley2, C. Stefanidou3, S. Utz3 and C. A. Nelson4, (1)Edgbaston, University of Birmingham, Birmingham, United Kingdom, (2)Labs of Cognitive Neuroscience, Children's Hospital Boston, Boston, MA, (3)School of Psychology, University of Birmingham, Birmingham, United Kingdom, (4)Laboratories of Cognitive Neuroscience, Children's Hospital Boston/Harvard Medical School, Boston, MA
Background: Previous behavioural and neuroimaging studies have found evidence that children with autism spectrum disorders (ASD) have difficulties with semantic processing, with particular deficits in verbal comprehension.  By studying the semantic integration of word and environmental sound information, we recently uncovered evidence that these semantic processing deficits may be specific to the verbal, rather than the nonverbal, domain.

Objectives: To compare the semantic integration of environmental sounds with the semantic integration of emotional information in faces and voices in children with autism.

Methods: Participants were fifteen 3- to 6-year-old high-functioning children with ASD and fifteen typically developing control children, matched on chronological age, developmental age, and gender.  We recorded event-related potentials (ERPs) while the children viewed pictures of instruments (drums, guitars) followed shortly by matching or mismatching nonverbal sounds (drum sounds, guitar sounds), and while they viewed pictures of emotional faces (happy, fearful) followed by matching or mismatching emotional voices (happy voice, fearful voice).  Face stimuli were drawn from the MacBrain standardised emotional expression dataset, and the emotional voice stimuli (the nonsense words “gopper sarla”) were validated as accurate representations of happy and fearful emotional prosody in a rating study in which twelve typically developing adult participants judged the recordings against six different emotion types.  We analysed two ERP components involved in semantic and cognitive integration: the N400 and the Late Positive Component (LPC).
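
As a concrete illustration of this style of ERP component measurement, the following is a minimal sketch in Python using the open-source MNE library. The file name, event codes, channel picks (Cz, Pz), and component time windows (N400 ≈ 300–500 ms, LPC ≈ 500–800 ms) are illustrative assumptions, not details taken from the study.

```python
import mne

# Load preprocessed EEG and extract event markers for the four trial types
# (file name and event codes are hypothetical placeholders).
raw = mne.io.read_raw_fif("subject01_preprocessed_raw.fif", preload=True)
events = mne.find_events(raw)
event_id = {"sound/match": 1, "sound/mismatch": 2,
            "voice/match": 3, "voice/mismatch": 4}

# Epoch from 200 ms before to 1000 ms after sound/voice onset,
# baseline-corrected to the pre-stimulus interval.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=1.0,
                    baseline=(None, 0), preload=True)

def mean_amplitude(evoked, tmin, tmax, picks):
    """Mean amplitude (volts) over a time window and channel selection."""
    return evoked.copy().pick(picks).crop(tmin, tmax).data.mean()

# Average within each condition, then measure the two components in
# assumed windows: N400 ~300-500 ms, LPC ~500-800 ms post-onset.
for cond in event_id:
    evoked = epochs[cond].average()
    n400 = mean_amplitude(evoked, 0.30, 0.50, picks=["Cz", "Pz"])
    lpc = mean_amplitude(evoked, 0.50, 0.80, picks=["Cz", "Pz"])
    print(f"{cond}: N400 = {n400:.2e} V, LPC = {lpc:.2e} V")
```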

Results: An analysis of variance (ANOVA) with match and hemisphere as within-subjects factors and participant group as a between-subjects factor revealed a main effect of match on the LPC component in the environmental sounds condition, with larger amplitudes for mismatching than for matching stimuli (p < 0.01).  This match/mismatch effect was significant in the ASD group alone (p = 0.035) and showed a similar but non-significant trend in the control children alone (p = 0.10; match × hemisphere p = 0.09).  ANOVAs on the environmental sounds N400 component, and on both the N400 and LPC components in the emotional face/voice integration condition, revealed no main effects or group interactions.  Furthermore, neither group exhibited a significant match/mismatch effect in the emotional face/voice integration condition.
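
For readers who want to reproduce this style of analysis, here is a minimal sketch of the match × group comparison on LPC amplitude, using simulated data and the pingouin library (both our choices, not the authors'). The full design also crossed hemisphere as a second within-subjects factor; it is omitted here because pingouin's mixed_anova handles a single within-subjects factor.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)

# Long-format table of per-subject LPC mean amplitudes (simulated here;
# in practice these would come from the ERP measurements described above).
rows = []
for group in ("ASD", "TD"):
    for subj in range(15):
        for match in ("match", "mismatch"):
            # Simulate a larger LPC for mismatching stimuli.
            amp = rng.normal(loc=2.0 if match == "match" else 3.0, scale=1.0)
            rows.append({"subject": f"{group}{subj:02d}", "group": group,
                         "match": match, "lpc_uv": amp})
df = pd.DataFrame(rows)

# Mixed ANOVA: match (within-subjects) x group (between-subjects)
# on LPC amplitude.
aov = pg.mixed_anova(data=df, dv="lpc_uv", within="match",
                     subject="subject", between="group")
print(aov[["Source", "F", "p-unc"]])
```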

Conclusions: From these results, we conclude that the automatic semantic integration of nonverbal, environmental sound information is intact in children with autism.  Because neither group of children exhibited semantic integration effects for the emotional face/voice pairs, the current study could not effectively assess emotional integration in children with autism.  Future research might use emotional stimuli drawn from more dissimilar emotional categories, such as happy and disgusted faces and voices.
