LittleHelper: Using Google Glass to Assist Individuals with Autism in Job Interviews
Objectives: To develop an application called LittleHelper on the Google Glass platform to assist individuals with autism spectrum disorder (ASD) in maintaining appropriate eye contact and speech volume in an environment simulating a job interview with a single interviewer.
Methods: We rely on a face detector to locate the interviewer’s face. When the face is off-center, a red square appears at the center of the display, and a flashing semi-transparent arrow guides the user to turn so that the interviewer’s face aligns within the square. In cooperation with a local server, LittleHelper progressively builds an image mosaic of the entire environment so that the face can be located even when it falls outside the visual field. For sound level detection, we measure a temporally smoothed root mean square (RMS) of the audio signal. We first train a noise floor level to adapt to the ambient sound level; this noise floor lets us determine when the user is speaking and at what level. In an offline process, we determine the socially acceptable voice volume as a function of the ambient noise level and the distance to the interviewer, where the distance is estimated from the size of the detected face. At run time, we measure the user’s voice level and use a graphical slider to indicate visually whether the level is within the acceptable range.
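As a rough illustration of the audio pipeline described above, the sketch below computes a temporally smoothed RMS over 16-bit PCM frames using an exponential moving average, adapts a noise floor toward quiet frames, flags speech when the level rises well above that floor, and estimates the distance to the interviewer from the detected face width via a pinhole camera model. The class name, constants, and smoothing scheme are our assumptions for illustration, not the published implementation.

// Illustrative sketch of the sound-level pipeline: smoothed RMS,
// adaptive noise floor, speech detection, and distance estimation.
// All constants are assumed calibration values, not published ones.
public class VoiceLevelMonitor {
    private static final double SMOOTHING = 0.9;     // EMA weight for temporal smoothing (assumed)
    private static final double FLOOR_ADAPT = 0.995; // slow upward adaptation of the noise floor (assumed)
    private static final double SPEECH_FACTOR = 2.0; // speech = RMS well above the noise floor (assumed)

    private double smoothedRms = 0.0;
    private double noiseFloor = Double.MAX_VALUE;

    /** Process one frame of 16-bit PCM samples; returns the smoothed RMS. */
    public double processFrame(short[] pcm) {
        double sumSquares = 0.0;
        for (short s : pcm) {
            sumSquares += (double) s * s;
        }
        double rms = Math.sqrt(sumSquares / pcm.length);

        // Temporal smoothing: exponential moving average of the frame RMS.
        smoothedRms = SMOOTHING * smoothedRms + (1.0 - SMOOTHING) * rms;

        // The noise floor snaps down to quiet frames and drifts up slowly,
        // so it tracks the ambient level without following the user's speech.
        if (smoothedRms < noiseFloor) {
            noiseFloor = smoothedRms;
        } else {
            noiseFloor = FLOOR_ADAPT * noiseFloor + (1.0 - FLOOR_ADAPT) * smoothedRms;
        }
        return smoothedRms;
    }

    /** The user is considered to be speaking when the level rises well above the floor. */
    public boolean isSpeaking() {
        return smoothedRms > SPEECH_FACTOR * noiseFloor;
    }

    /** Pinhole-model distance estimate from the detected face width in pixels.
     *  Face width and focal length are assumed calibration values. */
    public static double estimateDistanceMeters(double faceWidthPx) {
        final double AVG_FACE_WIDTH_M = 0.16;  // typical adult face width (assumption)
        final double FOCAL_LENGTH_PX = 1000.0; // camera focal length in pixels (assumption)
        return AVG_FACE_WIDTH_M * FOCAL_LENGTH_PX / faceWidthPx;
    }
}

In the app, the smoothed RMS measured while the user is speaking would be compared against the offline-calibrated acceptable range for the current noise floor and estimated distance, and the result would drive the graphical slider.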
Results: We measured the socially acceptable volume range based on the judgments of five subjects who listened to the same speaker speaking at different volume levels and at different distances, ranging from 0.5 m to 3 m in 0.5 m increments. Voice detection and visual feedback are computed in real time on the Google Glass itself. To build the image mosaic, LittleHelper sends an image and the detected face location to a local server over Wi-Fi once per second (Brown and Lowe 2007). Figure 1 shows two screenshots captured from the display as the Google Glass user followed the arrow toward the face and began speaking loudly.
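The once-per-second upload could be implemented along the lines of the sketch below; the endpoint path and the header-based encoding of the face rectangle are hypothetical, and only the idea of sending a frame plus the detected face location over Wi-Fi comes from the text above.

// Illustrative sketch of the once-per-second upload to the stitching server.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class MosaicUploader {
    private final URL serverUrl;

    public MosaicUploader(String host) throws Exception {
        this.serverUrl = new URL("http://" + host + "/mosaic"); // hypothetical endpoint
    }

    /** Send one JPEG-encoded frame and the face bounding box (in pixels). */
    public void upload(byte[] jpeg, int faceX, int faceY, int faceW, int faceH) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) serverUrl.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "image/jpeg");
        // Face location travels in a header so the body stays a plain JPEG (assumed scheme).
        conn.setRequestProperty("X-Face-Rect", faceX + "," + faceY + "," + faceW + "," + faceH);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(jpeg);
        }
        conn.getResponseCode(); // block until the server acknowledges
        conn.disconnect();
    }
}

On the Glass side, a ScheduledExecutorService (or an Android Handler) could invoke upload once per second with the latest preview frame and face rectangle, while the server stitches the incoming frames into the mosaic.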
Conclusions: We have designed a Google Glass app, LittleHelper, to help individuals with ASD maintain appropriate eye contact and voice level in a job-interview-like setting. Current limitations include the need for a local server to offload complex computation and the inability to handle multiple people in a social setting.