Examining How Children with and without ASD Extract Emotion from Prosody

Poster Presentation
Saturday, May 4, 2019: 11:30 AM-1:30 PM
Room: 710 (Palais des congrès de Montréal)
N. E. Scheerer1, F. Shafai2, R. A. Stevenson3 and G. Iarocci1, (1)Simon Fraser University, Burnaby, BC, Canada, (2)The University of Western Ontario, London, ON, Canada, (3)Western University, London, ON, Canada
Background:

Individuals with Autism Spectrum Disorder (ASD) have difficulty perceiving and expressing emotions. Since prosodic changes in speech (i.e., changes in intonation, stress, and rhythm) are crucial for extracting information about the emotional state of the speaker, an inability to perceive and interpret these prosodic changes may lead to impairments in social communication.

Objectives:

Previous work investigating the ability of individuals with ASD to identify the affective intentions conveyed by changes in speech prosody has been complicated by the fact that experimental stimuli often carry affective value in both their prosodic and their semantic content. The objective of this study was to use non-verbal affective sound-clips to determine whether children with ASD have difficulty extracting affect from changes in prosody. This research also explored whether difficulty extracting affective intent from changes in prosody may be related to social competence.

Methods:

Children with (n = 26) and without (n = 26) ASD between the ages of 6 and 13 years listened to short non-verbal affective sound-clips (affect bursts) and were asked to match the emotion expressed in each affect burst to either an emotional face or an emotion word. Affect bursts were obtained from the Montreal Affective Voices database, while faces were obtained from the Karolinska Directed Emotional Faces database. The parent-report Multidimensional Social Competence Scale (MSCS) was used to measure social competence.

Results:

A 2 (stimulus type: face, word) × 2 (group: ASD, non-ASD) repeated-measures analysis of variance was conducted on matching accuracy. There was a main effect of stimulus type, F(1, 50) = 71.10, p < .001, η² = .587: accuracy was higher when affect bursts were matched to words (M = 84.86, SE = 1.30) than to faces (M = 73.24, SE = 1.30). The stimulus type by group interaction was also significant, F(1, 50) = 4.63, p = .036, η² = .085: children with ASD (M = 84.78, SE = 1.84) and without ASD (M = 84.94, SE = 1.84) performed similarly when matching the affect bursts to words, but children with ASD (M = 70.19, SE = 1.84) were less accurate than children without ASD (M = 76.28, SE = 1.84) when matching the affect bursts to faces. Accordingly, MSCS scores were positively correlated with face-matching accuracy, r(50) = .281, p = .048, but not with word-matching accuracy, r(50) = .023, p = .874.
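For readers who want to reproduce this style of analysis, below is a minimal sketch in Python using the pingouin library's mixed_anova and corr functions. The data-frame layout, simulated scores, and variable names are illustrative assumptions, not the study's actual data or code.

```python
# A minimal sketch (not the authors' code) of the 2 x 2 ANOVA reported above:
# stimulus type varies within subjects, diagnostic group between subjects.
# The simulated cell means come from the Results section; the noise level,
# column names, and MSCS scores are illustrative assumptions.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n_per_group = 26  # 26 children with ASD, 26 without

# Approximate cell means taken from the Results section
cell_means = {("ASD", "face"): 70.19, ("ASD", "word"): 84.78,
              ("TD", "face"): 76.28, ("TD", "word"): 84.94}

rows = []
for group in ("ASD", "TD"):
    for subj in range(n_per_group):
        for stim in ("face", "word"):
            rows.append({"subject": f"{group}_{subj}",
                         "group": group,
                         "stimulus_type": stim,
                         "accuracy": cell_means[(group, stim)] + rng.normal(0, 9)})
df = pd.DataFrame(rows)

# Mixed ANOVA: one within-subject factor, one between-subject factor
aov = pg.mixed_anova(data=df, dv="accuracy", within="stimulus_type",
                     between="group", subject="subject")
print(aov.round(3))

# Follow-up correlation in the same style as the reported MSCS analysis,
# here with placeholder (random) MSCS scores
face_acc = df.query("stimulus_type == 'face'").sort_values("subject")["accuracy"]
mscs = pd.Series(rng.normal(100, 15, size=2 * n_per_group), index=face_acc.index)
print(pg.corr(mscs, face_acc).round(3))
```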

Conclusions:

Children with and without ASD accurately matched affect bursts to emotion words, suggesting that children with ASD can accurately extract the affective meaning conveyed by changes in prosody. However, children with ASD were less accurate when matching the affect bursts to emotional faces, suggesting that they struggle to make use of this information in a social context. Given that affect-face matching accuracy was correlated with social competence, an inability to integrate social information derived from a speaker's voice and face may interfere with effective social communication. Future research will explore whether this pattern reflects difficulty extracting affective meaning from faces or difficulty integrating emotional information from multiple modalities.
