Leveraging AAC Usage Patterns for Diagnostic Classification: A Proof of Concept

Thursday, May 11, 2017: 12:00 PM-1:40 PM
Golden Gate Ballroom (Marriott Marquis Hotel)
B. Li1, A. Atyabi2, Y. A. Ahn1, L. Boccanfuso3, J. Snider4 and F. Shic5, (1)Seattle Children's Research Institute, Seattle, WA, (2)Seattle Children's Research Institute and University of Washington, Seattle, WA, (3)Yale University, New Haven, CT, (4)Yale Child Study Center, New Haven, CT, (5)Center for Child Health, Behavior and Development, Seattle Children's, Seattle, WA
Background: Augmentative and Alternative Communication (AAC) apps are widely used to facilitate communication and enhance language learning in individuals with disabilities such as autism spectrum disorder (ASD). However, to date, the utility and potential of the massive streams of usage data generated by these apps have been little explored from a data mining perspective.

Objectives: To use data mining techniques to generate a novel feature representation and analysis approach that targets differences in AAC usage patterns between users with and without ASD.

Methods: The data used in this study comprised the keypresses of 189 users (81 with ASD, 30 TD, and 78 with aphasia, language disorder, learning disability, or other disorders) recorded over multiple sessions with FreeSpeech, an iPad AAC application that produces audio when users select pictures. Keypresses were sorted into categories, and each key was assigned a unique identifier within the range of its associated category. Usage data were segmented with a sliding window of 20 keypresses and 75% overlap, and each segment was treated as a standalone sample. Random Forest (RF) and linear Support Vector Machine (linear-SVM) classifiers were trained to distinguish ASD from non-ASD users based on these 20-keypress usage patterns. Ten repetitions of k-fold cross-validation (k = 10) were performed, with 90%/10% training/testing splits. Accuracy and Cohen's kappa were used to compare model performance.
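The windowing step described above can be sketched as follows (a minimal illustration, assuming keypresses are already encoded as integer identifiers; the function name and defaults are ours, not from the study's codebase):

```python
def segment_keypresses(keypresses, window_size=20, overlap=0.75):
    """Split a keypress sequence into overlapping fixed-length segments.

    With window_size=20 and overlap=0.75, the window start advances by
    20 * (1 - 0.75) = 5 keypresses between consecutive segments, so
    adjacent segments share 15 keypresses.
    """
    step = max(1, int(window_size * (1 - overlap)))
    return [keypresses[start:start + window_size]
            for start in range(0, len(keypresses) - window_size + 1, step)]
```

For example, a recording of 35 keypresses yields four 20-keypress segments starting at offsets 0, 5, 10, and 15; recordings shorter than the window produce no segments.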

Results: 12,882 segments of 20 keypresses were extracted from the 189 users (ASD = 8,567 segments; non-ASD = 4,315 segments). Chance-level performance is 50%, and a naive constant model achieves 66.5% classification accuracy by assigning every segment to ASD. Linear-SVM performed only slightly above this constant baseline (68.27% accuracy, Cohen's kappa = 0.17). RF achieved 79.33% accuracy (recall = 0.85, precision = 0.84), Cohen's kappa = 0.53, and chi-square = 3709.3 (p < .001). Two additional window sizes, 10 and 15 keypresses, were also evaluated for RF, achieving 74.54% and 74.18% classification accuracy, respectively. The slightly better performance of the 20-keypress window suggests that users' behavioral patterns become distorted when fewer keypresses are considered.
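The gap between raw accuracy and Cohen's kappa for the constant baseline can be reproduced directly from the segment counts above (a small arithmetic sketch; the helper names are ours):

```python
def majority_baseline_accuracy(n_pos, n_neg):
    """Accuracy of a constant model that always predicts the majority class."""
    return max(n_pos, n_neg) / (n_pos + n_neg)

def cohens_kappa(observed_accuracy, expected_accuracy):
    """Chance-corrected agreement: 0 when observed equals chance expectation."""
    return (observed_accuracy - expected_accuracy) / (1 - expected_accuracy)

# Segment counts reported above: 8,567 ASD vs 4,315 non-ASD.
baseline = majority_baseline_accuracy(8567, 4315)   # about 0.665
# For a model that always predicts ASD, the chance-expected agreement is
# the ASD proportion itself, so its kappa is exactly 0 despite 66.5% accuracy.
kappa_constant = cohens_kappa(baseline, baseline)
```

This is why kappa, rather than raw accuracy, is the more informative comparison against the 66.5% constant baseline.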

Conclusions: We investigated the informativeness of a feature representation that converts nonstationary keypress recordings into stationary, sliding time-windowed patterns. The feasibility of this feature representation is evidenced by the good performance of the RF approach. More comprehensive data collection and classification model training should yield more robust and accurate models that can be coupled with the application for real-time analysis of usage patterns, potentially enabling adaptive content based on those patterns.