26927
Automated Artifact Detection for EEG Data Using a Convolutional Neural Network

Poster Presentation
Friday, May 11, 2018: 10:00 AM-1:30 PM
Hall Grote Zaal (de Doelen ICC Rotterdam)
T. McAllister, A. Naples and J. McPartland, Child Study Center, Yale University School of Medicine, New Haven, CT
Background: Electroencephalography (EEG) is a valuable tool for studying Autism Spectrum Disorder (ASD). It provides a rich, temporally precise measure of brain activity that is inexpensive and appropriate for individuals of all ages and levels of cognitive ability. However, EEG is easily contaminated by artifactual signal generated by movement and muscle activity. This artifactual data must be identified and excluded, yet there is little consensus on methodology for its automated detection. Consequently, EEG, particularly in developmental and clinical populations, is still checked for artifact by hand, a time-intensive and error-prone process. In other domains, such as high-level image recognition, convolutional neural networks (CNNs) have been effective in automating complicated classification tasks and show promise for automatically classifying EEG artifact.

Objectives: We aimed to (1) develop a CNN to classify contaminated EEG collected from infants at normal and high risk for ASD; (2) assess its performance against human experts; and (3) compare its classification performance between normal-risk and high-risk infants to explore potential differences in artifact across groups.

Methods: Data collected in 118 EEG sessions from infants at normal (NR) or high (HR) risk of developing ASD were split into event-related epochs and manually coded by a human expert as artifactual or normal. Epochs were converted into two-dimensional arrays of amplitude by time across EEG channels, yielding 5834 artifactual and 5388 clean examples. These were split into a training set (N=8800) and a validation set (N=2422) for a six-layer CNN built with the Python library TensorFlow. Data were processed in two CNNs: the first downsampled (DS) the data into 256 discrete values; the second rescaled (RS) the data to the range (-100, 100) µV.
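The two input-normalization schemes can be sketched as follows. This is a minimal illustration, not the authors' code: the abstract does not specify epoch dimensions or exact procedures, so "downsampling into 256 discrete values" is interpreted here as amplitude quantization into 256 levels, and "rescaling" as a per-epoch min-max rescale into (-100, 100) µV; both interpretations are assumptions.

```python
def quantize(epoch, levels=256, vmin=-100.0, vmax=100.0):
    """DS-style preprocessing sketch (assumption): clip each amplitude (µV)
    to [vmin, vmax], then map it onto one of `levels` integer bins.
    `epoch` is a list of channels, each a list of amplitudes over time."""
    step = (vmax - vmin) / (levels - 1)
    return [[round((min(max(v, vmin), vmax) - vmin) / step) for v in ch]
            for ch in epoch]

def rescale(epoch, lo=-100.0, hi=100.0):
    """RS-style preprocessing sketch (assumption): linearly rescale the
    epoch's amplitudes so the epoch's min/max map onto lo/hi µV."""
    flat = [v for ch in epoch for v in ch]
    mn, mx = min(flat), max(flat)
    span = (mx - mn) or 1.0  # avoid division by zero on a flat epoch
    return [[lo + (v - mn) * (hi - lo) / span for v in ch] for ch in epoch]
```

Either transform yields fixed-range inputs, which keeps the CNN's input distribution comparable across epochs and recording sessions.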

Results: The CNNs were tested on the validation set of novel epochs. Clean epochs incorrectly labeled artifactual were counted as false positive epochs (FPE), while overlooked artifacts were counted as false negative epochs (FNE). Both the DS-CNN and RS-CNN classified EEG with approximately 80% accuracy. The DS-CNN had 14% FPE and 6% FNE; the RS-CNN had 4.6% FPE and 16% FNE. However, performance differed by risk group. Tested on only HR epochs, the DS-CNN was correct 80.6% of the time with 17% FPE; on only NR epochs, it was correct 76% of the time with 15% FPE. Tested on only HR epochs, the RS-CNN was correct 78.5% of the time with 4% FPE; on only NR epochs, it was correct 80.5% of the time with 6% FPE.
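The reported accuracy, FPE, and FNE figures can be derived from a confusion matrix over validation epochs. A minimal sketch follows; the counts in the usage comment are made-up for illustration, not the study's data. The reported DS-CNN figures (≈80% accuracy, 14% FPE, 6% FNE) sum to roughly 100%, which suggests FPE and FNE are expressed as fractions of all epochs rather than of each class; that reading is assumed here.

```python
def artifact_metrics(tp, fp, tn, fn):
    """Summary metrics for an artifact classifier.

    tp: artifactual epochs correctly flagged
    fp: clean epochs incorrectly labeled artifactual (FPE)
    tn: clean epochs correctly passed
    fn: artifacts the classifier overlooked (FNE)

    FPE/FNE rates are computed as fractions of all epochs
    (an assumption consistent with the reported numbers).
    """
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "fpe_rate": fp / total,
        "fne_rate": fn / total,
    }

# Illustrative counts only (hypothetical, not from the study):
m = artifact_metrics(tp=400, fp=70, tn=430, fn=100)
```

Reporting FPE and FNE separately matters here because the two networks trade them off: the DS-CNN errs toward flagging clean data, while the RS-CNN errs toward missing artifacts.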

Conclusions: Our results show that CNNs can classify EEG artifact at rates approaching human expert performance in a fraction of the time, warranting further development. With different pre-processing or more specifically tailored networks, CNNs could become a valuable method of EEG artifact detection. Ongoing analyses of classification performance between groups will allow us to detect differential patterns of artifact by risk status, advancing automated classification while generating insights into patterns of activity that differentiate groups by risk status.