Cross-Modal Coordination of Face-Directed Gaze and Emotional Speech Production in Adolescents with ASD

Poster Presentation
Saturday, May 4, 2019: 11:30 AM-1:30 PM
Room: 710 (Palais des congrès de Montréal)
T. Sorensen1, E. Zane2, T. Feng3, S. Narayanan3 and R. Grossman4, (1)University of Southern California, Los Angeles, CA, (2)FACE Lab, Emerson College, Boston, MA, (3)University of Southern California, Los Angeles, CA, (4)CSD, Emerson College, Boston, MA
Background: The facial and vocal expressions of individuals with autism are often perceived as awkward (Grossman 2015, Faso et al. 2015, Sasson et al. 2017), which may be caused by atypical timing and synchrony of movements across face regions (Guha et al. 2016, Metallinou et al. 2013). Additionally, findings on eye gaze to the face in ASD are mixed (Papagiannopoulou et al. 2014, Falck-Ytter & von Hofsten 2011). Despite increased interest in cross-modal integration in ASD, there have been no investigations to date into the relationship between speech production and emotional facial movements, nor into whether face-directed gaze relates to facial-expression quality in ASD.

Objectives: In an emotional speech mimicry task, we predict that adolescents with ASD will make primarily lower-face movements to support speech production, without making concurrent upper-face movements to express emotion. We also expect that adolescents who spend more time gazing at the face of a video model will produce more upper-face movements for emotional expression.

Methods: Participants watched and mimicked videos of adolescents producing two-sentence combinations (a neutral sentence followed by an emotional one). We recorded acoustic measures of verbal speech (intensity, F0, etc.), facial motion capture (32 markers across the face), and eye-tracking data (dwell time to the face) from 13 adolescents with ASD and 19 neurotypical (NT) adolescents. We used Granger causality to measure the strength of coordination between facial movements and acoustic measures: strong Granger causality indicates rigid speech-face dependence with few effective degrees of freedom, whereas weak Granger causality indicates independent control of the face and voice. We obtained Autism-Spectrum Quotient (AQ) scores as a continuous measure of autism features and used linear mixed effects models to analyze the relationship between AQ and (1) Granger causality between speech and facial movements and (2) face-directed gaze.
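As a minimal sketch of this kind of analysis, assuming each trial yields a time-aligned facial-marker trajectory and an acoustic series, pairwise Granger causality could be computed as below; the statsmodels-based implementation, the variable names, and the max-F strength index are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def granger_strength(marker_ts, acoustic_ts, max_lag=5):
    """Test whether an acoustic series Granger-causes a facial-marker series.

    Returns the largest F statistic across lags as a simple strength index.
    Both inputs are assumed to be equal-length, evenly sampled 1-D arrays.
    """
    # statsmodels tests whether column 2 Granger-causes column 1
    data = np.column_stack([marker_ts, acoustic_ts])
    results = grangercausalitytests(data, maxlag=max_lag, verbose=False)
    # results[lag][0]['ssr_ftest'] -> (F, p, df_denom, df_num)
    return max(results[lag][0]["ssr_ftest"][0] for lag in results)

# Hypothetical example: one marker trajectory driven by speech intensity
rng = np.random.default_rng(0)
intensity = rng.standard_normal(500).cumsum()
marker = np.roll(intensity, 3) + rng.standard_normal(500)  # lagged copy + noise
print(granger_strength(marker, intensity))  # large F: strong dependence
```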

Results: Participants with more autistic features (i.e., higher AQ scores) have greater cross-modal dependence than low-AQ adolescents (χ2(7)=1541.9, p<0.05). A linear mixed effects model with random slopes for AQ by motion-capture marker demonstrates that the effect of AQ is stronger for the lower face (lower cheek, mouth, chin) than for the upper face (eyes, eyebrows, forehead) (U=595, p<0.05).
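A hedged sketch of how such a model comparison could be set up in Python with statsmodels, assuming a hypothetical long-format table (columns subject, marker, AQ, gc) holding one Granger-causality strength per participant and marker; the likelihood-ratio comparison of maximum-likelihood fits shown here is one standard way to obtain a chi-squared statistic for the AQ effect, not necessarily the authors' exact procedure.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical long-format table: one row per participant x marker,
# with 'gc' the Granger-causality strength for that marker.
df = pd.read_csv("granger_by_marker.csv")  # columns: subject, marker, AQ, gc

# Full model: fixed effect of AQ, with random slopes for AQ by marker.
full_fit = smf.mixedlm("gc ~ AQ", df, groups=df["marker"],
                       re_formula="~AQ").fit(reml=False)  # ML, so fits are comparable

# Reduced model without the AQ fixed effect.
null_fit = smf.mixedlm("gc ~ 1", df, groups=df["marker"],
                       re_formula="~AQ").fit(reml=False)

# Likelihood-ratio test for the AQ effect.
lr = 2 * (full_fit.llf - null_fit.llf)
ddf = len(full_fit.params) - len(null_fit.params)
print(f"chi2({ddf}) = {lr:.1f}, p = {stats.chi2.sf(lr, ddf):.3g}")
```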

AQ scores also interact with face-directed gaze. Specifically, the slope for net dwell time on any part of the face is steeper for adolescents with high AQ (linear mixed effects model; all p<0.05): heightened visual attention to the face is associated with greater speech-face dependence in high-AQ participants, but with less dependence in low-AQ participants.
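Continuing the hypothetical setup above, this gaze interaction could be expressed by adding a dwell-time column ('dwell', an assumed name) and an AQ-by-dwell product term to the fixed effects, letting the dwell-time slope vary with AQ:

```python
import statsmodels.formula.api as smf  # df as in the previous sketch, plus a 'dwell' column

inter = smf.mixedlm("gc ~ AQ * dwell", df, groups=df["marker"],
                    re_formula="~AQ").fit(reml=False)
print(inter.summary())  # sign of the AQ:dwell term gives the interaction direction
```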

Conclusions: High-AQ adolescents have greater cross-modal dependence, suggesting that in an emotional-speech mimicry task they move all facial regions primarily for speech production rather than for additional emotional expressivity. In contrast, low-AQ participants produce more facial movements not directly related to speech, particularly in the upper face (e.g., eyebrow raises), which can convey emotional expressivity without being tied to the rhythm of lower-face speech movements. Contrary to our prediction, the difference between high- and low-AQ participants is amplified by greater visual attention to an emotional face.