19818
Revealing Sub-Categorization Strategies Used By Children with Autism Spectrum Disorders to Decode Facial Expressions of Emotion

Friday, May 15, 2015: 5:30 PM-7:00 PM
Imperial Ballroom (Grand America Hotel)
K. Ainsworth1, D. R. Simmons1, O. Garrod1, I. Delis1, B. Heptonstall2, P. Schyns1 and J. Tanaka3, (1)The University of Glasgow, Glasgow, United Kingdom, (2)University of Victoria, Victoria, BC, Canada, (3)Psychology, University of Victoria, Victoria, BC, Canada
Background:  

The ability of individuals with Autism Spectrum Disorder (ASD) to accurately recognize expressions of emotion from faces has attracted a substantial body of research over the past 30 years (Harms et al., 2010, Neuropsych. Rev., 20, 290-322), much of which proposes a deficit in the perception of dynamic facial expressions in this group. However, the assertion that ASD emotion research has been “slow and confusing” because “the methodology lags woefully behind the questions we would like to ask” (Frith, 2003, Blackwell Publishing) still chimes with the current state of research in this field.

Objectives:  

We apply a novel methodology to answer fundamental questions about facial expression recognition (FER) in ASD that have yet to be satisfactorily answered. In particular, we:

  1. Reveal detailed information about the facial components utilized in FER
  2. Decode specific timing parameters exploited for FER
  3. Determine both 1. and 2. at multiple levels of emotion decoding, i.e. including sub-categories of each emotion (e.g. ‘subtle’, ‘flirtatious’ or ‘overjoyed’ as sub-categories of ‘happy’)

Methods:  

Sixteen children with ASD viewed a total of 19,000 dynamic facial stimuli [http://www.psy.gla.ac.uk/~kirstya/emotions/example_stim2.html]. The stimuli were produced using a Generative Face Grammar (GFG; Yu et al., 2012, Comput. Graph., 36, 152–162) where, on each trial, the GFG randomly selects a set of action units (AUs; Ekman & Friesen, 1978, Consulting Psychologists Press) from 41 possible AUs, together with 6 temporal parameters. Combining these parameters produces a random, but physiologically plausible, facial animation. Each child categorized each ‘random’ stimulus as happy or angry (yes/no). Ultimately, the children used their own subjective understanding of what ‘happy’ and ‘angry’ represent to produce subjectively driven models of these expressions.
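To make the trial-generation logic concrete, the following is a minimal illustrative sketch (in Python) of how a random but structured stimulus specification of this kind could be sampled. It is not the authors' GFG code; the AU labels, the six temporal-parameter names, and the number of AUs per trial are assumptions for illustration only.

```python
# Illustrative sketch only: sample a subset of action units (AUs) and assign
# each one values on six temporal parameters for a single reverse-correlation trial.
import random

ALL_AUS = [f"AU{i}" for i in range(1, 42)]  # 41 candidate AUs (placeholder labels)
TEMPORAL_PARAMS = ["onset_latency", "acceleration", "peak_amplitude",
                   "peak_latency", "deceleration", "offset_latency"]  # assumed parameter names

def sample_trial(n_aus=4, seed=None):
    """Return one random trial specification: a set of AUs, each with
    values for the six temporal parameters."""
    rng = random.Random(seed)
    chosen = rng.sample(ALL_AUS, n_aus)
    return {au: {p: rng.random() for p in TEMPORAL_PARAMS} for au in chosen}

trial = sample_trial(seed=1)
# A generative face model (here, the GFG) would render this specification as a
# dynamic facial animation, and the child's yes/no response is recorded alongside it.
```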

The data were analyzed using non-negative matrix factorization (NMF): a dimensionality reduction technique that uses the group’s responses to identify a number of ‘categorization strategies’. By ‘strategies’ we mean the sub-categories of facial expressions, and we determine their number by fitting them to the presented facial animations for each emotion and group. Prior to the NMF analysis, responses were averaged across participants for each trial. We then rendered the NMF output back into face models so that the results are easily visualized.
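A minimal sketch of the NMF step is given below, assuming the data have been arranged as a non-negative matrix X of shape (trials, stimulus parameters), with each row holding an animation's AU/temporal parameters weighted by the averaged group response. The shapes, variable names, and choice of five components are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((19000, 47))   # placeholder data: 19,000 trials x (41 AUs + 6 temporal parameters)

model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)    # per-trial weights on each 'categorization strategy'
H = model.components_         # strategies expressed over AU/temporal parameters

# Each row of H can then be rendered back through the face-model pipeline to
# visualize one sub-category (strategy) as a dynamic facial expression.
```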

Results:  

The face models can be viewed here [https://db.tt/ZnioWX83], where the rows depict ‘happy’, ‘non-happy’, ‘angry’ and ‘non-angry’ respectively. These results indicate that atypical AU components and timing parameters are utilized for facial expression perception in children with ASD. Five sub-categories were revealed for the ASD group’s categorization of ‘happy’ and ‘non-happy’, but only three emerged for ‘angry’ and ‘non-angry’.

Conclusions:  

Here we present a novel methodology to better understand facial expression recognition in ASD. The findings indicate that children with ASD utilize several sub-categories of ‘happy’ and ‘angry’, with notable atypicalities in AU use and timing in many of these. We reveal AU and timing information not only for broad emotion categories but also for specific, subjectively driven sub-categories of emotions: a novel finding which is unique to this method.