Recognition of Music-Evoked Emotions Among Adolescents with Autism Spectrum Disorder: Examining the Effect of Musical Excerpt Duration and Relationship with Cognitive Skills.

Poster Presentation
Friday, May 11, 2018: 5:30 PM-7:00 PM
Hall Grote Zaal (de Doelen ICC Rotterdam)
T. Fernandes1, H. Dahary2, S. Sivathasan1 and E. M. Quintin1, (1)Educational & Counselling Psychology, McGill University, Montreal, QC, Canada, (2)McGill University, Montreal, QC, Canada
Background: Individuals with autism spectrum disorder (ASD) show impairments in recognizing emotions conveyed in facial expressions and speech, but can recognize music-evoked emotions comparably to individuals with typical development, suggesting a strength within the musical domain. Music is a dynamic stimulus that can convey different emotions over the course of minutes and even seconds. Yet, the potential confound of length of exposure to a melody on music-evoked emotion recognition has not been investigated within the context of ASD. We investigated this issue while considering cognitive skills and ASD symptomatology.

Objectives: We examined the effects of 1) stimulus (musical excerpt) exposure duration, 2) cognitive skills (verbal and visual-spatial), and 3) ASD symptomatology on the performance (accuracy and reaction time) of adolescents with ASD on music-evoked emotion recognition tasks.

Methods: Twenty-one participants with ASD (mean age = 14.14 years) completed two music-evoked emotion recognition tasks of varying durations. The long music task (LMT) consisted of 15 excerpts with a mean duration of 37 seconds, while the short music task (SMT) comprised 18 excerpts with a duration of 4 seconds. In both tasks, participants were asked to identify the emotion that best described the music (happy, sad, or fearful). Participants completed the Verbal Comprehension Index (VCI) and Visual Spatial Index (VSI) of the WISC-V, and their teachers completed the Social Responsiveness Scale (SRS-2). Participants were divided into Low and High groups based on a median split of WISC-V scores (VCI: 80; VSI: 95).

Results: An examination of emotion recognition accuracy revealed ceiling effects for the LMT (M = 94%, SD = 10%) and SMT (M = 92%, SD = 10%). A repeated-measures ANOVA with response time as the outcome variable, task and emotion as within-subjects factors, and VCI group as a between-subjects factor revealed significant main effects of task (p < .01) and emotion (p = .03). Participants were faster at identifying emotions in the SMT than in the LMT, and faster at identifying happy than fearful music-evoked emotions. A significant task-by-group interaction (p = .01) revealed that the Low VCI group was slower at identifying emotions in the LMT but was comparable to the High VCI group in the SMT. The same pattern of results was found with VSI group as a between-subjects factor. ASD symptomatology did not have an effect on response times.

Conclusions: Results indicate that adolescents with ASD can accurately recognize music-evoked emotions irrespective of musical excerpt exposure duration. However, their speed at identifying emotions was differentially affected by exposure duration and cognitive skills. These findings suggest that emotion processing and decision making among individuals with ASD who have difficulty with verbal and visual-spatial skills may be facilitated by shorter exposure to emotional music, and potentially to other types of stimuli, which warrants further research.