Google Tech Talk
June 15, 2015
Presented by Mahadev Satyanarayanan, School of Computer Science, Carnegie Mellon University
GPS navigation systems have transformed our driving experience. They guide you step-by-step to your destination, offering you helpful just-in-time voice guidance about upcoming actions. Can we generalize this metaphor? Imagine a wearable cognitive assistance system that combines a device like Google Glass with cloud-based processing to guide you through a complex task. You hear a synthesized voice telling you what to do next, and you see visual cues in the Glass display. When you make an error, the system catches it immediately and corrects you before the error cascades.
Fortunately, this is not science fiction. To support this genre of applications, we have created a multi-tiered system architecture called Gabriel. This architecture preserves tight end-to-end latency bounds on compute-intensive operations, while addressing concerns such as the limited battery capacity and limited processing capability of wearable devices. We have gained initial experience in building Gabriel applications that provide user assistance for narrow, well-defined tasks requiring specialized knowledge and/or skills. Specifically, we have built proof-of-concept implementations for four tasks: assembling 2D Lego models, freehand sketching, playing ping-pong, and recommending context-relevant YouTube tutorials. This talk will examine the potential and challenges of wearable cognitive assistance, present the Gabriel architecture, and describe the early proof-of-concept applications that we have built on it.
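The core loop described above can be pictured as the wearable capturing sensor frames, offloading them to a nearby compute tier for cognitive processing, and speaking the returned guidance, all within a tight latency budget. The following is a minimal illustrative sketch of that loop, not Gabriel's actual API; the `cognitive_engine` function, the frame dictionary, and the 100 ms budget are all assumptions made for the example.

```python
import time

LATENCY_BOUND_MS = 100  # assumed end-to-end budget, for illustration only


def cognitive_engine(frame):
    """Stand-in for a cloud/cloudlet-hosted engine (e.g. Lego-step detection).

    In a real deployment this would run a vision pipeline on the offloaded
    frame; here it just inspects a flag in a hypothetical frame dict.
    """
    if frame.get("step_done"):
        return "Good job. Now attach the red brick on top."
    return "Not quite. Check the alignment of the blue brick."


def offload_frame(frame):
    """Wearable side: ship a frame out, receive guidance, check the budget."""
    start = time.monotonic()
    guidance = cognitive_engine(frame)  # the network hop is elided here
    elapsed_ms = (time.monotonic() - start) * 1000
    within_bound = elapsed_ms <= LATENCY_BOUND_MS
    return guidance, within_bound


guidance, on_time = offload_frame({"step_done": True})
print(guidance, on_time)
```

The point of the latency check is that guidance arriving after the user has already moved on is worthless, which is why the architecture keeps the compute tier close and the round trip bounded.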