Background:
Children with Autism Spectrum Disorders (ASD) exhibit functional impairments in communication and social interaction. Assessment of these characteristics generally relies on professional observation, but unfamiliar clinical settings and the need to interact with one or more strangers may influence a child’s behavior during evaluation, potentially reducing the validity of the assessment. Recent advances in hardware and software mitigate this problem: a large volume of interaction data can be collected unobtrusively in a natural environment via digital audio recording, then labeled and analyzed automatically using speech recognition technology and statistical algorithms. Previously published research using such technology indicated that children with ASD differed from typically developing (TD) children in the frequency of monologues and interactions with adults (Warren et al., 2009). A further investigation showed that the frequency with which TD children initiate verbal interactions increases with age, but found no such correlational pattern among children with ASD (Gilkerson, Richards, & Xu, 2010).
Objectives:
The current study expands on prior work using these technologies to provide additional analyses of daily interaction patterns of children with ASD. Specifically, we examined child monologues (CMs) and child-initiated conversations (CICs) in relation to various symptom and performance measures. We also compared these interaction indicators during versus outside therapy sessions.
Methods:
The current sample included 79 children, 16–48 months of age, diagnosed with ASD. Participants wore a lightweight digital recorder that captured all of their vocalizations and interactions throughout a 12-hour day, both in the naturalistic home environment and during therapy sessions. The 383 included recordings (4,596 hours) were processed using the LENA (Language Environment Analysis) framework to quantify child vocalizations and other details of the child’s language environment. The software automatically identified periods of single-speaker vocalization (child and adult monologues) and conversational interactions (parent/child alternations bounded by >5 seconds of non-interaction). The current analysis examines these daily interaction patterns (CICs and CMs) in relation to SCQ, M-CHAT, BRIEF-P, and CSBS scores, and to behavior during versus outside of intervention.
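The segmentation rule described above can be illustrated with a minimal sketch. This is not the LENA software's actual (proprietary) algorithm; the speaker labels, field names, and `segment` function below are illustrative assumptions. The sketch groups time-ordered vocalizations into blocks whenever a gap of more than 5 seconds occurs, then labels an all-child block as a child monologue (CM) and a multi-speaker block opened by the child as a child-initiated conversation (CIC).

```python
from dataclasses import dataclass

@dataclass
class Vocalization:
    speaker: str   # "child" or "adult" (illustrative labels, not LENA's categories)
    start: float   # onset time in seconds
    end: float     # offset time in seconds

GAP = 5.0  # >5 s of non-interaction closes a conversational block

def segment(vocs):
    """Group time-ordered vocalizations into blocks, then label each block."""
    blocks, current = [], []
    for v in vocs:
        # A gap longer than GAP between this vocalization and the previous
        # one ends the current block and starts a new one.
        if current and v.start - current[-1].end > GAP:
            blocks.append(current)
            current = []
        current.append(v)
    if current:
        blocks.append(current)

    labeled = []
    for block in blocks:
        speakers = {v.speaker for v in block}
        if speakers == {"child"}:
            kind = "CM"    # only the child spoke: child monologue
        elif block[0].speaker == "child":
            kind = "CIC"   # child spoke first in an exchange: child-initiated conversation
        else:
            kind = "other" # adult-initiated or adult-only block
        labeled.append((kind, block))
    return labeled
```

For example, a child-adult exchange followed by an isolated child utterance 15 seconds later would yield one CIC block and one CM block under this rule.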
Results:
Age-standardized CIC frequency correlated negatively with SCQ (r(75) = -.33, p < .01) and M-CHAT (r(77) = -.26, p < .05) totals, and positively with all three CSBS Composites: Social (r(75) = .32, p < .01), Speech (r(75) = .59, p < .01), and Symbolic (r(75) = .43, p < .01). Age-standardized CIC duration correlated positively with two CSBS Composites, Speech (r(75) = .44, p < .01) and Symbolic (r(75) = .28, p < .05), and with the Emergent Metacognition Index of the BRIEF-P (r(31) = .48, p < .01). The Inhibit scale of the BRIEF-P correlated negatively with both age-standardized CM frequency (r(31) = -.37, p < .05) and duration (r(31) = -.44, p < .01), and CM duration correlated positively with the CSBS Speech Composite (r(75) = .30, p < .01). Children produced significantly more CICs per hour during therapy sessions than on non-therapy days (t(50) = 4.65, p < .01).
Conclusions:
These results demonstrate that automated audio analysis can provide meaningful information about children’s daily interaction patterns in the natural home environment and can potentially inform both evaluation and treatment of children with ASD. This presentation will discuss how automated analysis of conversation patterns could be used in research and clinical contexts, both to measure intervention progress and to supplement assessment of symptom severity.