17804
Assessing Language in School-Aged Children with ASD in a Virtual, Public Speaking Task

Friday, May 16, 2014
Atrium Ballroom (Marriott Marquis Atlanta)
S. Torabian1, N. Alpers2, L. Naigles3, N. S. McIntyre4, T. Oswald5, L. E. Swain-Lerro4, S. Novotny6, T. Kapelkina7 and P. C. Mundy8, (1)Human Development, University of California Davis, Davis, CA, (2)University of Connecticut, Storrs, CT, (3)Psychology, University of Connecticut, Storrs, CT, (4)School of Education, UC Davis, Davis, CA, (5)MIND Institute, UC Davis, Sacramento, CA, (6)Human Development, UC Davis, Davis, CA, (7)UC Davis, Davis, CA, (8)MIND Institute and School of Education, UC Davis, Sacramento, CA
  • Background: Higher-functioning children with autism (HFA) may display language on par with typically developing (TD) peers on standardized measures, yet fail to use language fluently in social contexts. Developing cost-effective paradigms to assess their language difficulties in realistic social contexts, however, is challenging.
  • Objectives: This study examined the validity of a virtual reality public speaking language assessment for use with school-aged children with HFA. Language use was assessed across conditions that varied in social versus non-social context and in higher versus lower attention demands.
  • Methods: 150 children, ages 8–16, are currently participating in a longitudinal study of attention and social and academic development in children with HFA. Preliminary data are presented here on 13 children with HFA and 7 typically developing controls matched on age (11.6 vs. 11.5 years) and IQ (104 vs. 108). Children’s speech was audiotaped, transcribed, and then analyzed for four measures of dysfluency (‘um’, ‘uh’, false starts, repetitions) and six measures of language use (noun types and tokens, verb types and tokens, and discourse marker (‘well’, ‘like’) types and tokens). Language was assessed in a virtual reality public speaking task, in which the children viewed a virtual classroom through a head-mounted display. They were asked to answer questions about their interests and daily activities while addressing 9 targets around a large table in the VR classroom. There were three 3-minute conditions: in the Non-Social Attention condition, children talked to 9 “lollipop”-shaped forms positioned to their left and right at the table; in the Social Attention condition, children talked to 9 avatar “peers” to the left and right; and in the High-Demand Social Attention condition, they talked to 9 avatars that faded if children did not fixate on them every 15 seconds.
  • Results: Across the three conditions, the HFA group tended to decrease in noun, verb, and discourse marker token use (from 89.2 to 68.8), whereas the TD group’s usage increased (84.5 to 91.0). ANOVA revealed a Condition by Diagnostic Group interaction that approached significance, F(2, 9) = 3.357, p = .058. In addition, the HFA group produced more false starts (M = 7.9, SD = 5.55) and repetitions (M = 6.4, SD = 5.27) than the TD group (M = 3.75, SD = 4.35; M = 2.75, SD = 3.06, respectively), and both groups tended to increase in dysfluencies across the conditions. The HFA group produced more noun tokens (M = 33.8, SD = 6.36) than the TD controls (M = 31.2, SD = 8.13). However, the TD controls produced more verb tokens (M = 29.67, SD = 9.77) and discourse markers (M = 26.05, SD = 14.97) than the HFA group (M = 25.8, SD = 4.8; M = 20.0, SD = 15.28, respectively).
  • Conclusions: Preliminary data suggest that the VR paradigm can reveal that children with HFA produce more dysfluencies and fewer verbs and discourse markers than controls, and that their atypical language use may be affected by social and high-attention-demand contexts. Additional data will be presented.