International Meeting for Autism Research: Vocal Emotion Recognition In Autism Spectrum Disorders: When Psychoacoustics Meet Cognition

Saturday, May 14, 2011
Elizabeth Ballroom E-F and Lirenta Foyer Level 2 (Manchester Grand Hyatt)
9:00 AM
O. Golan1, E. Globerson2, M. Lavidor1,3, L. Kishon-Rabin4 and N. Amir4, (1)Department of Psychology, Bar-Ilan University, Ramat-Gan, Israel, (2)Gonda Multidisciplinary Brain Center, Bar-Ilan University, Ramat-Gan, Israel, (3)Gonda Multidisciplinary Brain Center, Bar-Ilan University, Ramat-Gan, Israel, (4)Department of Communication Disorders, Tel-Aviv University, Tel Aviv, Israel
Background:  Prosody is an important tool of human vocal communication. Prosodic attributes of speech affect our ability to recognize, comprehend, and produce affect, as well as semantic and pragmatic meaning, based on the intonation, stress, and rhythm patterns of vocal utterances. Vocal emotion recognition relies on successful processing of prosodic cues in speech, which are interpreted according to predefined socio-emotional scripts. Individuals with Autism Spectrum Disorders (ASD) show deficiencies in prosodic abilities, both pragmatic and affective. Such deficiencies have mostly been attributed to cognitive difficulties in emotion recognition. Recently, we demonstrated a strong association between vocal emotion recognition and lower-level auditory perceptual abilities in the general population. The current study evaluates this paradigm in individuals with ASD.

Objectives:  To evaluate the association between psychoacoustic abilities and prosodic perception in individuals with ASD, in comparison to controls from the general population.

Methods:  21 high-functioning male adults with ASD and 32 male adults from the general population, matched on age and verbal abilities and screened for normal hearing limits, completed a battery of auditory tasks: psychoacoustic tasks, a pragmatic prosody recognition task (narrow focus recognition), and a vocal emotion recognition task. A facial emotion recognition task assessed non-vocal emotion recognition abilities.

Results:  Individuals with ASD scored significantly lower than controls on vocal and facial emotion recognition, but not on the pragmatic prosody recognition task or on any of the psychoacoustic tasks. Psychoacoustic abilities were strong predictors of vocal emotion recognition in both the ASD and control groups, whereas facial emotion recognition abilities were a significant predictor of vocal emotion recognition only in the ASD group. In the clinical group, psychoacoustic and facial emotion recognition abilities together explained 57.5% of the variance in vocal emotion recognition scores (R² = .575).

Conclusions:  Our results support previous findings of cross-modal emotion recognition difficulties in ASD. Furthermore, our findings suggest that lower level psychoacoustic factors and higher-level emotion recognition skills taken together may improve our understanding of vocal emotion recognition in ASD.
