Context Effects on Facial Affect Recognition in Autism and Schizophrenia

Friday, May 15, 2015: 5:30 PM-7:00 PM
Imperial Ballroom (Grand America Hotel)
K. E. Morrison1, J. R. Shasteen1, D. J. Faso2, A. E. Pinkham1 and N. J. Sasson2, (1)The University of Texas at Dallas, Richardson, TX, (2)University of Texas at Dallas, Richardson, TX
Background: Impairments in facial affect recognition in individuals with autism spectrum disorder (ASD) and schizophrenia (SCZ) are often reported on the basis of reduced accuracy in identifying emotional expressions presented in isolation. Real-world affect recognition, however, is informed by contextual factors that can modulate these inferences: the interpretation of a person's emotional state will differ, for example, if that person is crying at a wedding rather than at a funeral. Prominent cognitive theories (e.g., weak central coherence) suggest that both ASD and SCZ are characterized by a reduced tendency to integrate contextual information, a bias that may affect emotional evaluation in real-world environments.

Objectives: The present study investigated the effects of congruent and incongruent emotional contexts on facial emotion recognition in ASD and SCZ.

Methods: Three groups that did not differ on estimated I.Q. participated: 44 adults with SCZ, 24 with ASD, and 39 nonclinical controls. Participants completed a novel "Emotions in Context" task in which emotional faces were displayed across three conditions: in isolation, within scenes containing emotionally congruent information, and within scenes containing emotionally incongruent information. In the congruent and incongruent conditions, faces were not superimposed on scenes or shown beside them, as has been done previously, but were realistically integrated into the scene as the face of a primary character. Both behavioral (i.e., accuracy and response time) and eye-tracking data (e.g., percentage of fixation time on the face) were collected.

Results: Across all three conditions, controls were significantly more accurate (p = .001) and faster (p = .024) than both clinical groups, which did not differ from each other. A group × condition interaction on accuracy (F(4, 206) = 3.90, p = .004) emerged, driven by accuracy improving from the isolation to the congruent condition for controls but not for the clinical groups (controls: p = .007; ASD: p = .380; SCZ: p = .572). In contrast, all three groups showed significant decreases in accuracy from the isolation to the incongruent condition (all ps < .02). The RT discrepancy between the incongruent and isolation conditions was significantly greater in controls (M difference = .23 s) than in the ASD (M difference = -.39 s; p = .023) and SCZ groups (M difference = -.36 s; p = .045). Estimated I.Q. was a significant predictor of accuracy in each condition for the SCZ group (isolation: r = .46; congruent: r = .41; incongruent: r = .50; all ps < .006), but not for controls or the ASD group.
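The RT discrepancy analysis described above (each participant's incongruent-condition RT minus isolation-condition RT, compared between groups) can be sketched as follows. This is an illustrative sketch only: it uses simulated data, and the variable names, distributional assumptions, and test choice (an independent-samples t-test) are assumptions for illustration, not the study's actual data or analysis code.

```python
# Illustrative sketch (NOT the study's data or code): computing a per-participant
# RT discrepancy score (incongruent minus isolation) and comparing two groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated per-participant mean RTs in seconds (hypothetical values chosen to
# mirror the reported group sizes and mean differences).
n_controls, n_asd = 39, 24
controls_iso = rng.normal(1.50, 0.30, n_controls)
controls_incon = controls_iso + rng.normal(0.23, 0.40, n_controls)   # slowing
asd_iso = rng.normal(1.90, 0.35, n_asd)
asd_incon = asd_iso + rng.normal(-0.39, 0.40, n_asd)                 # no slowing

# Discrepancy score: incongruent RT minus isolation RT, per participant.
controls_diff = controls_incon - controls_iso
asd_diff = asd_incon - asd_iso

# Between-group comparison of the discrepancy scores.
t, p = stats.ttest_ind(controls_diff, asd_diff)
print(f"mean RT discrepancy (controls) = {controls_diff.mean():.2f} s")
print(f"mean RT discrepancy (ASD)      = {asd_diff.mean():.2f} s")
print(f"t = {t:.2f}, p = {p:.4f}")
```

A positive discrepancy score indicates slower responding when the scene conflicts with the facial expression, which is the pattern the abstract reports for controls but not for the clinical groups.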

Conclusions: Context effects were greater in the control group than in both the ASD and SCZ groups. Complementary contextual information facilitated emotion recognition in controls only, and although all three groups showed reduced accuracy in the incongruent condition, only controls showed increased processing time when presented with conflicting emotional information. Further, because I.Q. predicted accuracy only in the SCZ group, emotion recognition may rely upon general neurocognitive abilities to a greater degree in SCZ than in ASD. Analysis of eye-tracking data is underway and will be completed in time for the conference.