Through the (Google) Glass with Autism
The “gee-whiz” factor got knocked out of Google Glass pretty quickly. A little over two years after it was introduced to the public in a test version, Google put Glass back under deep research cover after people developed an almost instinctive aversion to its “in your face” intrusiveness, not to mention its poor functionality.
One of the biggest criticisms was how much people felt the device got in the way of social interaction. But in a different take on this technology and its potential, Google Glass is now being used as a means to help children with autism better recognize human emotions and navigate their social worlds. At least two groups of researchers are developing apps for Google Glass to help children with autism better “read” the facial expressions and social cues of those around them. In doing so they are exploring some fascinating intersections between neuroscience and the nature of human interaction.
At Stanford, the Autism Glass Project completed a Phase I clinical trial last year in which children wore the glasses and were “coached” by the device to identify emotions shown in images on a monitor. When the device’s camera detects an emotion such as happiness or sadness, the wearer sees the word “happy” or “sad,” or a related emoticon, flash on the glass display. After completing a series of exercises, the children were better able to identify and differentiate the emotions they saw. They also looked more frequently at people’s faces and made more eye contact.
Now the researchers are recruiting families in the Bay Area (and soon nationally) with children with autism (ages 4-17) to participate in a study of Autism Glass as a therapeutic tool in the real world of the home environment. Kids will wear the device for three 20-minute sessions each day, and Glass’s head motion tracking sensors and a custom-made infrared camera for eye tracking will be used to analyze the wearer’s behavior while interacting with family members and friends. Whatever the wearer sees can be saved onto a smartphone app for parents and children to review together later, facilitating discussion about emotions and their real-life contexts. The idea is not so much that people will wear the device all the time, but that it will aid in the treatment process during behavioral therapy. In an abstract the researchers point out that while “computer-assisted in-place treatment systems have been studied for years, little work has been done to bring the learning process away from the flashcards and into the daily life of people with autism spectrum disorder.”
In Cambridge, Mass., the start-up Brain Power, founded by neuroscientist Ned Sahin and based on brain science developed at MIT and Harvard, has developed a suite of applications for its “neuro-assistive wearable device” using the Google Glass platform. In one of the apps focused on social interaction, the wearer sees two small icons on the screen when looking at a face, and by nodding slightly can choose whichever one best represents the emotion being expressed. The wearer earns points for each correct selection. A key aspect of this approach is that feedback and reward all happen in real time, during real interactions, and encourage further interaction rather than the self-immersion more typical of computer-based learning. Brain Power is sponsoring the “BE YOURSELF” clinical trial to further investigate this and other applications for children and adults on the autism spectrum.
It’s comforting to know that a powerful technology like Google Glass just isn’t that exciting anymore if all it does is enable us to surf the web or check our emails faster. And it’s inspiring that the potential of this technology is being tapped in a completely different way to help understand and solve the most human of challenges.