Computer Vision Analyses of Social Coordination and Social Communication Deficits in Autism

Panel Presentation
Thursday, May 2, 2019: 10:55 AM
Room: 517A (Palais des congrès de Montréal)
E. Sariyanidi1, K. Bartley1, C. J. Zampella1, A. de Marchena2, J. Pandey3, E. S. Kim3, J. D. Herrington1, B. Tunc1, J. Parish-Morris1 and R. T. Schultz3, (1)Center for Autism Research, The Children's Hospital of Philadelphia, Philadelphia, PA, (2)University of the Sciences, Philadelphia, PA, (3)Center for Autism Research, Children's Hospital of Philadelphia, Philadelphia, PA
Background: Reciprocal, coordinated behavior is a fundamental feature of conspecific interactions; birds flock, fish school, bees swarm, and humans deftly coordinate their movements with partners during social interactions. This coordination happens so routinely that any disruption to it is palpable. Yet despite the ease with which we sense when a social interaction lacks fluidity, there are currently no tools that can reliably quantify the degree of coordination during an interaction in a highly granular and easily scalable manner, and no well-established quantitative methods for assessing group and individual differences in dyadic coordination.

Objectives: To develop robust quantitative methods for precisely assessing coordinated facial movements during social interaction; to test whether such a measurement process can distinguish individuals with autism spectrum disorder (ASD) from typically developing (TD) controls; and to test whether it can capture individual differences in social communication skill within the ASD group, as distinct from restricted and repetitive behaviors (RRB).

Methods: Our primary sample consisted of 44 young adults, 17 with ASD and 27 TD. We tested the generalizability of the results in a replication sample of 30 adolescents, 17 with ASD and 13 TD. Both samples were matched on age, verbal IQ (normative range), and gender. Participants engaged in an unstructured, 3-minute “get to know you” conversation with an unfamiliar study team confederate. Confederates were instructed not to initiate topics and not to speak more than 50% of the time. Dyadic interactions were captured with a specially designed “TreeCam” comprising two synchronized HD video cameras pointing in opposite directions. Dyadic facial coordination was automatically quantified with a computer vision and machine learning analytic pipeline. Facial movements were captured as a set of 180 independent, regional “bases”, each representing a time series of movement in a facial region (e.g., the corner of the mouth) for one person. Dyadic coordination between conversational partners was quantified with windowed cross-correlation between the partners’ time series. A machine learning framework (with nested, leave-one-out cross-validation; LOOCV) was designed to predict group membership (ASD vs. TD) and individual differences in ADOS-2 overall Calibrated Severity Scores (CSS), Social Affect (SA) scores, and RRB scores. Only the dyadic features that predicted group membership in the adult sample were used in the replication sample.
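The core coordination measure is a windowed cross-correlation between the two partners’ facial-movement time series. The Python sketch below illustrates one way such a computation could be implemented; the window length, step size, maximum lag, and toy signals are illustrative assumptions, not the study’s actual parameters or pipeline.

```python
# Illustrative sketch of windowed cross-correlation between two partners'
# facial-movement time series (one regional "basis" per partner).
# Window, step, and lag values are assumptions, not the study's settings.
import numpy as np

def windowed_xcorr(x, y, win=90, step=30, max_lag=15):
    """For each window, return the peak absolute Pearson correlation
    between x and lagged copies of y (lags in [-max_lag, max_lag])."""
    peaks = []
    for start in range(0, len(x) - win + 1, step):
        xs = x[start:start + win]
        best = 0.0
        for lag in range(-max_lag, max_lag + 1):
            lo, hi = start + lag, start + lag + win
            if lo < 0 or hi > len(y):
                continue  # lagged window falls outside the recording
            ys = y[lo:hi]
            if xs.std() == 0 or ys.std() == 0:
                continue  # avoid division by zero for flat segments
            r = np.corrcoef(xs, ys)[0, 1]
            best = max(best, abs(r))
        peaks.append(best)
    return np.array(peaks)

# Toy example: two partially coordinated signals at ~30 fps over 3 minutes.
rng = np.random.default_rng(0)
t = np.arange(3 * 60 * 30)
partner_a = np.sin(t / 40) + 0.3 * rng.standard_normal(len(t))
partner_b = np.roll(partner_a, 10) + 0.3 * rng.standard_normal(len(t))
features = windowed_xcorr(partner_a, partner_b)
print(features.mean())  # one dyadic coordination feature for this basis pair
```

In practice, summaries of these windowed correlations (computed across all regional bases) would serve as the dyadic features fed into the classification and regression models.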

Results: Classification (ASD vs. TD) accuracy was 88.6% (p<.0001; PPV=.93; NPV=.87) for the primary sample, and 86.7% (p<.0005; PPV=.88; NPV=.85) for the replication sample. Automated computer prediction in the primary sample was more accurate than that of expert (n=9; 87% vs. 82%) and non-expert (n=11; 87% vs. 77%) study staff who made diagnostic judgments from the same dyadic videos (p’s<.001). Using the feature groups selected for classification, support vector regression with LOOCV predicted the ADOS-2 CSS in the primary sample (r=.57, p=.02) and the replication sample (r=.53, p=.03). As hypothesized, correlations were higher for SA scores than for RRB scores (SA: r=.58 and .20; RRB: r=.00 and .06, in the primary and replication samples, respectively).
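For readers unfamiliar with the regression step, the sketch below shows leave-one-out support vector regression of a continuous score on a feature matrix using scikit-learn. The synthetic features and scores, the linear kernel, and the single (non-nested) cross-validation loop are assumptions for illustration only; the study’s feature selection and nested cross-validation are not reproduced here.

```python
# Minimal sketch: leave-one-out support vector regression of a continuous
# score on dyadic coordination features. All data here are synthetic.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.standard_normal((44, 20))           # 44 participants x 20 selected features (synthetic)
y = X[:, 0] * 2 + rng.standard_normal(44)   # synthetic "severity" scores

model = make_pipeline(StandardScaler(), SVR(kernel="linear", C=1.0))
pred = cross_val_predict(model, X, y, cv=LeaveOneOut())  # one held-out prediction per participant
r, p = pearsonr(y, pred)
print(f"LOOCV prediction r = {r:.2f} (p = {p:.3f})")
```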

Conclusions: Automatic assessment of social coordination from brief videos of natural conversations promises to be an important new tool for autism research, adding granularity and scalability to diagnostic and social communication assessment.