International Meeting for Autism Research: Virtual Reality Methods for the Study of Talking and Looking Behavior In People with High Functioning Autism (HFA)


Friday, May 13, 2011
Elizabeth Ballroom E-F and Lirenta Foyer Level 2 (Manchester Grand Hyatt)
11:00 AM
N. V. Hatt, W. Jarrold, M. V. Gwaltney, N. McIntyre, M. Solomon, S. Ozonoff, K. Kim, B. E. Seymour and P. C. Mundy, MIND Institute, UC Davis, Sacramento, CA
Background: Virtual reality (VR) may enable a novel means of studying variables relevant to social attention in HFA because looking behavior is not simply a dependent measure, as in traditional studies; rather, it can interactively affect experimental phenomena. Social attention involves coordinating visual orienting to multiple social partners while speaking to them, and it is relatively under-studied in school-age populations in comparison with younger children.

Objectives: To address the challenge of measuring associated behavioral variables, we computationally analyzed visual orienting (a.k.a. “looking behavior”) and speech produced during a social VR task. We propose to provide an interactive demonstration of this VR task.

Methods: Participants were children aged 8-16 years with HFA (n = 20) and matched typically developing (TD) controls (n = 20). The task required participants to respond to questions while looking at 9 avatars in a virtual classroom. Looking behavior was measured via head-tracker telemetry, and Social Attention was defined as the total number of looks to avatar head regions. In the Cued condition, avatars became translucent if they did not receive attention but became opaque again once fixated; in the Non-Cued condition, avatars remained opaque. Speech during the task was transcribed, and from the transcripts word count and the frequency of dysfluencies (e.g., "uh", "um") were computed.
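The speech measures described above can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline; the tokenization rule and the filler-word inventory are assumptions (the abstract lists only "uh" and "um" as examples).

```python
import re

# Hypothetical dysfluency inventory; the abstract gives only "uh" and "um"
# as examples, so the remaining entries are assumptions.
DYSFLUENCIES = {"uh", "um", "er", "hmm"}

def speech_measures(transcript: str) -> tuple[int, int]:
    """Return (word_count, dysfluency_count) for one transcript."""
    # Simple tokenization: lowercase runs of letters/apostrophes.
    tokens = re.findall(r"[a-z']+", transcript.lower())
    word_count = len(tokens)
    dysfluency_count = sum(1 for t in tokens if t in DYSFLUENCIES)
    return word_count, dysfluency_count

wc, dys = speech_measures("Um, I think, uh, the answer is seven.")
# wc counts all tokens; dys counts only tokens in DYSFLUENCIES.
```

In practice the dysfluency rate would be normalized (e.g., per total words) before comparing participants who speak different amounts.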

Results: Results revealed no significant differences between the preadolescent (8-11 years) HFA and TD groups, which displayed comparable rates of Social Attention, i.e., the number of looks to avatars, across two 3-minute trials (106.8 vs. 98.3 looks, respectively). In contrast, there was a robust difference between the adolescent (12-16 years) HFA and TD groups, F(1, 14) = 6.28, eta² = .31 (105.4 vs. 144.5 looks, respectively). Thus, similar to O’Hearn et al. (2010), results revealed typical adolescent advances in social attention capabilities that were not seen in the adolescent HFA sample. Additionally, HFA children with higher ADHD scores (T-score > 75) displayed significantly lower Social Attention than all other subgroups (82.93 looks, F = 6.15, eta² = .20).

Speech measures were reliable across cuing conditions for word count and dysfluency frequency (r = .85 and r = .71, respectively). Word count was not significantly correlated with dysfluencies, suggesting that the two reflect distinct behavioral dimensions. Contrasting the Cued versus Non-Cued conditions revealed decreased word count (paired t(38) = -2.04, p = 0.048) and increased dysfluencies (paired t(38) = 1.97, p = 0.057) in the Cued condition, consistent with the hypothesized increased cognitive load of that condition.
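The condition contrast above uses a paired t-test (each participant contributes one value per cuing condition). A minimal sketch of the statistic, under the assumption of matched per-participant measurements:

```python
import math

def paired_t(x: list[float], y: list[float]) -> tuple[float, int]:
    """Paired t statistic and degrees of freedom for two matched samples.

    x[i] and y[i] must come from the same participant (e.g., word count
    in the Cued vs. Non-Cued condition).
    """
    d = [a - b for a, b in zip(x, y)]       # per-participant differences
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                 # standard error of the mean diff
    return mean / se, n - 1                 # t statistic, df = n - 1
```

With n = 39 matched pairs this yields the df = 38 reported above; a p-value would then be read from the t distribution (e.g., via `scipy.stats`).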

Conclusions: Social attention in HFA was poorer than in TD, but only in the older age group, suggesting a late-onset developmental disturbance in HFA. Our VR Social Attention measure is associated with a standard attention measure. Additionally, the hypothesis that the increased cognitive load of the Cued condition would manifest as increased dysfluency frequency and decreased word count was supported. In sum, VR-based social orienting and speech-analytic measures were associated in reliable and/or meaningful ways with independent clinical and VR cuing variables.
