
Mebook: A Novel Device Using Video Self-Modeling to Enhance Literacy Among Children with ASD

Friday, 3 May 2013: 09:00-13:00
Banquet Hall (Kursaal Centre)
10:00
S. C. S. Cheung1, J. Shen2 and N. Uzuegbunam1, (1)Center for Visualization and Virtual Environments, University of Kentucky, Lexington, KY, (2)Center for Visualization and Virtual Environments, University of Kentucky, Lexington, KY
Background: Reading books with children improves their literacy skills. Children with ASD often cannot fully engage with a story because of off-task behavior and short attention spans. Better self-regulation has been reported with interactive digital books, or ebooks (Pykhtina et al., 2012). Ebooks also open the door to new sensors. This work focuses on an innovative use of a color-depth (RGB-D) camera to incorporate video self-modeling (VSM) into ebooks. VSM uses specially prepared video of the child to model a target behavior (Dowrick, 1983). It is an evidence-based intervention shown to be effective across a variety of learning tasks (Buggey, 2009). To combine VSM with ebooks, we propose portraying the reader as the protagonist of the story in a video shown alongside the ebook. We hypothesize that this visual immersion of the self into the story holds the reader's attention and promotes self-regulatory activities.

Objectives: The aim is to build a video application, MEBOOK, that overlays features of the protagonist onto the reader's face and replaces the scene background with animated video pertinent to the story.

Methods: MEBOOK runs on a computer equipped with an RGB-D camera. The depth data allow us to separate the reader from the background and to track a 3D model of the face in real time. We use the 3D model to insert synthetic objects that stay geometrically aligned with the face regardless of pose and motion; for example, Dumbo's trunk can be visually attached to the reader's face. The 3D model also lets us adjust the viewpoint to create the illusion that the reader is looking directly at him- or herself. The separation of foreground from background is akin to the green-screen technology used in newscasts: a more suitable background, such as a real jungle video, is substituted in while the foreground objects are left intact. The MEBOOK user interface places the rendered video next to the story text, and highlighted keywords let the reader change characters and the background video.
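The depth-based background substitution described above can be sketched as a simple per-pixel composite: pixels whose depth reading falls within a near range are treated as the reader (foreground) and kept, while all other pixels are replaced by the substitute background frame. The snippet below is a minimal illustration of that idea, not the authors' implementation; the function name, the fixed depth threshold, and the toy frame data are all assumptions for demonstration.

```python
import numpy as np

def composite_background(rgb, depth, background, depth_threshold=1.2):
    """Replace the scene behind the reader using depth data.

    rgb:        (H, W, 3) color frame from the RGB-D camera
    depth:      (H, W) per-pixel distance in meters (0 = no reading)
    background: (H, W, 3) substitute background frame (e.g. a jungle video frame)
    Pixels with a valid depth closer than depth_threshold are kept as
    foreground (the reader); everything else shows the new background.
    """
    mask = (depth > 0) & (depth < depth_threshold)  # reader = valid near pixels
    out = background.copy()
    out[mask] = rgb[mask]                           # paste the reader on top
    return out

# Toy 2x2 frame: one near pixel (the "reader") survives compositing.
rgb = np.full((2, 2, 3), 200, dtype=np.uint8)       # uniform gray camera frame
depth = np.array([[0.8, 3.0], [3.0, 3.0]])          # meters; only [0,0] is near
background = np.zeros((2, 2, 3), dtype=np.uint8)    # black substitute background
frame = composite_background(rgb, depth, background)
print(frame[0, 0].tolist(), frame[1, 1].tolist())   # → [200, 200, 200] [0, 0, 0]
```

A production system would smooth the mask (e.g. with morphological filtering) and feather its edges to avoid halo artifacts around the reader, but the core green-screen-without-a-green-screen step is just this depth-gated composite.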

Results: This study demonstrates the feasibility of MEBOOK, an application that uses an RGB-D camera to capture a 3D model of a child and renders self-video depicting the reader as part of a digital story. Using an appropriately chosen story, we will demonstrate the real-time responsiveness of the system, the overall quality and robustness of the rendering, and the intuitive, engaging user interface of the application as a whole.

Conclusions: We have designed a software application, MEBOOK, that enhances digital storybooks with interactive visualization tools, making them suitable for children with ASD. Its novelty lies in using the child's face as a character in the story, a form of self-modeling, to engage the child in the narrative. A follow-up study measuring the effectiveness of MEBOOK in enhancing reading comprehension and self-regulation among 5- to 7-year-old children with ASD is underway.
