Patterns of Emotion Recognition in Speech and Song Among Children with ASD: Investigating the Effects of Emotion and Intensity

Poster Presentation
Saturday, May 4, 2019: 11:30 AM-1:30 PM
Room: 710 (Palais des congrès de Montréal)
T. Fernandes, J. Burack and E. M. Quintin, Educational & Counselling Psychology, McGill University, Montreal, QC, Canada
Background: The social communication profile characteristic of persons with ASD may be related to difficulties in inferring the emotional state of others from several components of social interaction, including speech. Yet, researchers have not reliably found deficits in emotion recognition from speech; findings vary with the specific emotions and the intensity with which they are conveyed. In one example of a relative strength, people with ASD appear to be particularly adept at recognizing emotions from instrumental music. Whether this strength generalizes can be tested by extending it to song (vocal music).

Objectives: We compared emotion recognition from speech and song among children with ASD by examining the effects of specific emotions and emotional intensity.

Methods: Thirty children with ASD (age M = 11.67, SD = 2.28) completed a computerized task in which they identified emotions of varying intensity from spoken or sung sentences with neutral semantic content. The task comprised 64 trials (2 stimulus conditions [speech, song] × 4 emotions [happy, angry, sad, scared] × 2 actors [1 male] × 2 statements × 2 intensity levels [normal, high]). The participants also completed the Verbal Comprehension Index (VCI) of the WISC-V and were divided into two groups (n = 15 each) based on a median split at a VCI score of 75.
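
As a quick illustration of the fully crossed design, the following minimal Python sketch (not part of the study; all labels are hypothetical) enumerates the factor combinations and confirms they yield 64 unique trials:

    # Minimal sketch (not the authors' code): enumerate the crossed factors
    # described above and confirm they yield 64 unique trials.
    from itertools import product

    conditions = ["speech", "song"]                  # 2 stimulus conditions
    emotions = ["happy", "angry", "sad", "scared"]   # 4 emotions
    actors = ["actor_1", "actor_2"]                  # 2 actors (labels hypothetical)
    statements = ["statement_1", "statement_2"]      # 2 neutral-content sentences
    intensities = ["normal", "high"]                 # 2 intensity levels

    trials = list(product(conditions, emotions, actors, statements, intensities))
    assert len(trials) == 64  # 2 x 4 x 2 x 2 x 2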

Results: A repeated-measures ANOVA with emotion recognition accuracy as the dependent variable, stimulus condition (speech vs. song), emotion, and intensity as within-subjects factors, and VCI group as a between-subjects factor revealed significant main effects of stimulus condition (p < .001), emotion (p < .001), and intensity (p < .001). Accuracy was higher for speech than for song and for emotions conveyed at high intensity than at normal intensity. Across speech and song, accuracy was highest for angry trials and lowest for scared trials. A significant emotion × condition interaction (p < .001) emerged: anger and fear were recognized more accurately in speech than in song, whereas happiness and sadness were recognized equally accurately in both conditions. A significant emotion × intensity interaction (p < .001) also emerged: happiness, anger, and fear were recognized more accurately at high intensity, whereas sadness was recognized more accurately at normal intensity. The VCI group effect was not significant.
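
For readers who want to run this kind of analysis, a hedged sketch of the within-subjects portion follows, assuming a long-format table of per-cell accuracy scores (the file and column names are hypothetical, not from the study). Note that statsmodels' AnovaRM does not handle the between-subjects VCI group factor; the full mixed design would require other software.

    # Hedged sketch, not the authors' analysis script. Assumes a long-format
    # table of accuracy scores with hypothetical file and column names.
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    df = pd.read_csv("accuracy_long.csv")  # subject, condition, emotion, intensity, accuracy

    aov = AnovaRM(
        df,
        depvar="accuracy",
        subject="subject",
        within=["condition", "emotion", "intensity"],
        aggregate_func="mean",  # average over actors and statements within each cell
    ).fit()
    print(aov)  # F tests for the three main effects and their interactions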

Conclusions: The results from this study demonstrate that children with ASD recognize emotions more easily from speech than from song, and more easily when emotions are intensely conveyed (except in the case of sadness). This suggests that their observed strength in recognizing emotions from instrumental music may not extend to song. The finding that fear was identified less accurately than other emotions is consistent with the amygdala theory of ASD, which posits that persons with ASD may show atypical connectivity of the amygdala, an area implicated in fear perception and response. These findings also have implications for interventions that extend beyond recognition of facial expressions and consider emotion intensity, by first teaching intensely conveyed emotions and then those that are conveyed more subtly.
