Spoken Language in School-Aged Children with ASD and ADHD in a Virtual, Public Speaking Task

Friday, May 15, 2015: 11:30 AM-1:30 PM
Imperial Ballroom (Grand America Hotel)
N. S. Alpers1, S. Torabian2, N. S. McIntyre3, L. Naigles4 and P. C. Mundy5, (1)University of Connecticut, Willimantic, CT, (2)University of California Davis, Los Altos Hills, CA, (3)School of Education, UC Davis, Davis, CA, (4)University of Connecticut, Storrs, CT, (5)MIND Institute and School of Education, UC Davis, Sacramento, CA
Background: Standardized language scores for higher-functioning children with autism (HFA) and higher-functioning children with ADHD may be on par with those of their typically developing (TD) peers; however, the language these children produce seems qualitatively different, particularly in their ability to communicate effectively in a social context (Losh & Capps, 2003). In the current study, we investigate these group differences, comparing broad measures of noun and verb usage and sentence complexity with more subtle indicators such as discourse markers, which not only ‘glue’ conversational turns together (e.g., ‘so’, ‘like’, ‘uh’, ‘um’; Schiffrin, 1987) but also shape and uniquely characterize how a person speaks.

Objectives: This study examined children’s language during a virtual reality public speaking task designed for use with school-aged children with HFA. Language use was assessed across conditions that varied in social versus non-social context and in higher versus lower attention demands.

Methods: 150 children aged 8-16 are currently participating in a longitudinal study of attention, social, and academic development in children with HFA. Here, preliminary data are presented on 12 HFA, 15 TD, and 14 ADHD participants, matched on age (Ms = 11.85, 12.33, and 11.96 years, respectively) and Verbal IQ (Ms = 99, 108, and 93, respectively). Children viewed a virtual classroom through a head-mounted display and were asked to answer questions about their interests and daily activities while addressing 9 targets. There were three 3-minute conditions: In the Non-Social Attention condition, children talked to 9 lollipop-shaped “targets” positioned to their left and right in the classroom; in the Social Attention condition, children talked to 9 avatar “peers” to the left and right; and in the High-Demand Social Attention condition, they talked to 9 avatars that faded if children did not fixate on them every 15 seconds. Children’s speech was audiotaped, transcribed, and analyzed for seven measures of language use: Mean Length of Utterance (MLU), noun types and tokens, verb types and tokens, and discourse marker types and tokens.
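To make the transcript measures concrete, the sketch below (hypothetical; not the authors’ actual coding pipeline) shows how MLU in words and discourse-marker type/token counts could be computed from a list of transcribed utterances. The marker set is the subset of examples given in the Background (‘so’, ‘like’, ‘uh’, ‘um’); a real analysis would use a fuller inventory and hand-code ambiguous items (e.g., ‘like’ as a verb versus a marker).

```python
# Hypothetical sketch of two of the seven language measures: MLU in words
# and discourse-marker types/tokens. Assumes utterances are pre-segmented
# and lowercased strings; counts are purely surface-form based.

DISCOURSE_MARKERS = {"so", "like", "uh", "um"}  # example subset only

def language_measures(utterances):
    """Return MLU (words per utterance) and discourse-marker counts."""
    words = [w for utt in utterances for w in utt.split()]
    mlu = len(words) / len(utterances) if utterances else 0.0
    marker_tokens = [w for w in words if w in DISCOURSE_MARKERS]
    return {
        "mlu": mlu,
        "marker_tokens": len(marker_tokens),   # every occurrence counted
        "marker_types": len(set(marker_tokens)),  # distinct markers used
    }

print(language_measures(["um so i like dogs", "uh yeah"]))
```

Note that this surface-form count would treat ‘like’ in “i like dogs” as a marker, which is exactly the kind of ambiguity that transcript-based coding must resolve by hand.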

Results: ANOVAs revealed no significant group effect on MLU; however, children with ADHD and HFA tended to produce shorter utterances than TD children (Ms = 9.60, 11.23, and 16.38 words, respectively), F(2, 38) = 2.976, p = .063. Groups differed significantly in verb token production, F(2, 38) = 4.147, p = .023 (see Figure 1); the TD group produced significantly more verb tokens than the HFA group, especially during the High-Demand Social Attention condition (p < .05). In addition, across groups, total discourse marker use varied by phase, F(1, 38) = 14.193, p = .001, η² = .272, with children producing more discourse markers during the phases with greater cognitive load (see Figure 2).

Conclusions: The VR paradigm reveals subtle language differences between groups: children with HFA produced fewer verbs than controls, possibly because their replies were generally terse. Moreover, their language use appears to be affected by social context and high attention demands. Future analyses will compare the specific discourse markers used most frequently—or avoided—by each group.