Comparing Intensity Ratings of Emotions in Music and Faces by Adolescents with Autism Spectrum Disorder

Poster Presentation
Friday, May 11, 2018: 5:30 PM-7:00 PM
Hall Grote Zaal (de Doelen ICC Rotterdam)
H. Dahary1, S. Sivathasan2 and E. M. Quintin2, (1)McGill University, Montreal, QC, Canada, (2)Educational & Counselling Psychology, McGill University, Montreal, QC, Canada
Background: Individuals with autism spectrum disorder (ASD) often demonstrate difficulty processing basic emotions in faces, particularly specific negative emotions such as sadness (Boraston et al., 2007) and fear (Humphreys et al., 2007). However, little research has compared emotion processing of faces with that of other modalities. Music is a powerful emotional vehicle (Juslin & Sloboda, 2001) and an area in which individuals with ASD often show great interest and skill (Heaton, 2009); it is thus an alternative (and potentially preferred) domain for measuring emotion processing. Further, studies on emotion recognition with low-functioning adolescents are virtually non-existent, which limits the applicability of findings to individuals with varying cognitive abilities.

Objectives: The purpose of this research is two-fold: 1) to directly compare intensity ratings of music-evoked and facial expressions of emotions, and 2) to extend the applicability of findings to adolescents with varying levels of cognitive functioning.

Methods: Twenty-three participants with ASD, aged 12 to 16, with low to high WISC-V Verbal IQ (50-111), completed three emotion recognition (ER) tasks: a Music ER task and two Facial ER tasks (1. Face Only ER task, 2. Combined ER task). Across the three tasks, participants identified and rated the intensity of emotions (i.e., happy, sad, or fearful) in music excerpts (Music ER task) and in faces (Facial ER tasks). The Face Only ER task presented each face without accompanying music, while the Combined ER task presented each face simultaneously with music that evoked the same emotion as the face. Each ER task included 18 trials (6 trials per emotion) of 4 seconds each and took about 2-3 minutes to complete.

Results: Participants with lower cognitive ability (VCI ≤ 80; n = 12) rated emotions more intensely than those with higher cognitive ability (VCI > 80; n = 11) across the Music and Facial ER tasks (p < .01). However, cognitive ability did not have a significant effect on intensity ratings of specific emotions or on intensity ratings of emotions within any of the three ER tasks (p > .05). Across participants, a main effect of emotion revealed that happy and fearful stimuli were rated more intensely than sad stimuli (p < .01). A marginal main effect of task showed that participants rated emotions more intensely in the Combined ER task than in the Music ER task (p = .07). A two-way interaction between task and emotion was also found, such that in the Music ER task, fearful and sad stimuli were rated more intensely than happy stimuli, whereas in the Combined ER task, the reverse was found (p < .05).

Conclusions: Adolescents with ASD and lower cognitive ability appear to be more sensitive to emotions presented in faces and music than adolescents with ASD and higher cognitive ability. Higher intensity ratings for arousing emotional stimuli (happy and fearful) may support previous accounts of atypical development or connectivity of limbic brain areas, including the amygdala (Baron-Cohen, 2000). Findings have important implications for targeted music interventions that capitalize on disorder-specific strengths (musical ability) to teach emotion processing skills to individuals with ASD.