A Novel Electrophysiological Marker of Autism Spectrum Disorder Based on Facial Expression Mental Imagery

Thursday, May 11, 2017: 5:30 PM-7:00 PM
Golden Gate Ballroom (Marriott Marquis Hotel)
M. Simoes1,2, R. Monteiro3, J. Andrade3, S. Mouga3,4, P. Carvalho2, G. Oliveira3,4,5 and M. Castelo-Branco6, (1)IBILI - Institute for Biomedical Imaging and Life Sciences, Faculty of Medicine – University of Coimbra, Portugal, Coimbra, Portugal, (2)Center for Informatics and Systems, University of Coimbra, Coimbra, Portugal, (3)Institute for Biomedical Imaging and Life Science, Faculty of Medicine, University of Coimbra, Coimbra, Portugal, (4)Unidade de Neurodesenvolvimento e Autismo, Pediatric Hospital, Centro Hospitalar e Universitário de Coimbra, Coimbra, Portugal, (5)University Clinic of Pediatrics, Faculty of Medicine, University of Coimbra, Coimbra, Portugal, (6)CIBIT & IBILI - Institute for Biomedical Imaging and Life Sciences, Faculty of Medicine – University of Coimbra, Portugal, Coimbra, Portugal

The diagnosis of autism spectrum disorder (ASD) is based on behavioural assessment by multidisciplinary specialized teams and is therefore not free of subjective bias, creating a need for specific ‘biological markers’ (or ‘biomarkers’). Several biological characteristics of the disorder have recently been identified, especially in the field of functional genomics; however, they cover only a small percentage of cases (roughly 15% to 20%). Some studies have addressed neural coherence deficits to propose EEG-based classifiers. Here, we assessed the possibility of using face-processing-specific metrics to achieve neurophysiological discrimination.


To build a classifier that discriminates between individuals with ASD and typically developing (TD) individuals, capable of automatically identifying whether a new individual belongs to the ASD or the TD group, using electroencephalography (EEG) data recorded during mental imagery of facial expressions.


Participants with ASD (n=17) and TD controls (n=17), matched by age and performance intelligence quotient, underwent a mental imagery task of happy vs. sad facial expressions of a virtual avatar while EEG was recorded from 58 scalp locations. The experimental design consisted of viewing the avatar performing a dynamic facial expression (happy or sad) and then, after an auditory cue, imagining the avatar performing it again.

EEG data were preprocessed to remove bad channels and segments and to correct for artifacts. A set of features from the time, frequency, and non-linear domains was then extracted for each electrode and for seven frequency bands: theta, alpha, beta, three beta sub-bands, and low-gamma.
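As an illustration, a frequency-domain feature of the kind described above (mean spectral power per band) could be computed as follows. This is a minimal sketch, not the authors' pipeline: the band boundaries, the sampling rate, and the use of Welch's method are all assumptions, since the abstract does not specify them.

```python
import numpy as np
from scipy.signal import welch

# Hypothetical band boundaries in Hz; the abstract does not give exact cut-offs
# for the three beta sub-bands or for low-gamma.
BANDS = {
    "theta": (4, 8),
    "alpha": (8, 13),
    "beta": (13, 30),
    "beta1": (13, 18),
    "beta2": (18, 24),
    "beta3": (24, 30),
    "low_gamma": (30, 45),
}

def band_powers(epoch, fs=250.0):
    """Mean spectral power per band for one electrode's epoch (1-D array)."""
    freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), 256))
    return {
        name: psd[(freqs >= lo) & (freqs < hi)].mean()
        for name, (lo, hi) in BANDS.items()
    }
```

In a full feature set, this dictionary would be computed per electrode and concatenated with the time-domain and non-linear features mentioned above.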

We performed feature selection on the data to identify the clusters of electrodes and frequency bands that best discriminate the groups. The best subset was then evaluated with a leave-one-out cross-validation approach: training data were transformed through a Principal Component Analysis (PCA), the first four components were used to train a linear support vector machine, the same transformation was applied to the test set, and the classifier's accuracy was measured. This analysis was paired with resting-state data (a baseline of neutral faces) to assess whether the results are task-specific or reflect ongoing EEG activity.
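The evaluation scheme described above (PCA fit on the training fold only, a linear SVM on the first components, leave-one-out cross-validation) can be sketched as follows. Function and variable names are illustrative, not the authors' code, and scikit-learn is an assumed implementation choice.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def loocv_accuracy(X, y, n_components=4):
    """Leave-one-out accuracy of a PCA + linear-SVM pipeline.

    The PCA is fit on each training fold only, and the same transform is
    then applied to the single held-out sample, avoiding information leakage
    from the test sample into the dimensionality reduction.
    """
    hits = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = make_pipeline(PCA(n_components=n_components),
                              SVC(kernel="linear"))
        model.fit(X[train_idx], y[train_idx])
        hits += int(model.predict(X[test_idx])[0] == y[test_idx][0])
    return hits / len(y)
```

Refitting the PCA inside every fold, rather than once on all 34 subjects, is what makes the reported accuracy an honest estimate of performance on a new individual.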


Feature selection showed that the best electrode clusters were located in the right fronto-temporal, right centro-parietal, left centro-parietal, and right parieto-occipital regions. The most discriminating frequency bands were theta, high-beta, and low-gamma. Our classifier achieved 88.2% accuracy, 94.1% specificity, and 82.4% sensitivity using just four principal components. Resting-state data (neutral face baseline) yielded only 73.5% accuracy, 70.5% specificity, and 76.5% sensitivity.


Our results suggest that it is possible to use EEG data from facial imagery to discriminate individuals with ASD from TD individuals. The most discriminating cluster locations coincide with the face perception network, and the high accuracy achieved by the classifier suggests that the impairments of the ASD group in facial expression processing and its mental imagery represent a robust biological phenotype. Results from the resting-state data suggest that some differences are present in ongoing activity, even when only a neutral face is shown, but accuracy increases substantially when subjects have to perform explicit imagery.