Crowdsourcing for Assistive Technology

This project starts from the following premise: there is a huge pool of people who want to volunteer to help others but don't, due to a variety of factors: the difficulty of discovering where their talents can be used, the need to physically commute to a volunteering site, and the challenge of finding times when they are free and their services are needed. Wouldn't it be great if we could overcome all of these limitations and free up this huge untapped pool of goodwill? We are working to harness cutting-edge computer science to do just that, by leveraging human volunteers to power intelligent machines that help disabled populations.

Consider this video detailing the experience of a blind person shopping in a grocery store. In the video, the blind person receives help while shopping from a grocery store employee. However, this service is not available at every store, and working with an employee often makes the blind person feel uncomfortable. We are working to create computer-based assistive technologies that help blind people shop.

Our current design is a pair of goggles, worn by the blind user, that are outfitted with a camera. The camera continuously streams video and audio to a sighted volunteer, who helps the blind user identify particular objects of interest from the incoming video feed. Once an object is identified, the system helps the user approach and grasp it effectively, combining low-latency automatic computer-vision guidance with additional input from the volunteer.

Picture from: http://grozi.calit2.net/
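
To make the streaming design concrete, below is a minimal sketch of the camera-side loop. It assumes an OpenCV-readable camera and a hypothetical relay server (RELAY_HOST and RELAY_PORT are placeholder names, not a real service) that forwards frames to the volunteer's client; audio, reconnection, and security are omitted.

```python
# Camera-side streaming loop (sketch). Assumptions: an OpenCV-readable
# camera at index 0, and a hypothetical relay at RELAY_HOST:RELAY_PORT
# that forwards length-prefixed JPEG frames to the volunteer's client.
import socket
import struct

import cv2

RELAY_HOST = "relay.example.org"  # placeholder, not a real service
RELAY_PORT = 9000

def stream_camera() -> None:
    cap = cv2.VideoCapture(0)  # the goggles' camera
    sock = socket.create_connection((RELAY_HOST, RELAY_PORT))
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # JPEG-compress each frame to keep bandwidth manageable.
            ok, jpeg = cv2.imencode(".jpg", frame,
                                    [cv2.IMWRITE_JPEG_QUALITY, 70])
            if not ok:
                continue
            data = jpeg.tobytes()
            # Length-prefix each frame so the receiver can split the stream.
            sock.sendall(struct.pack(">I", len(data)) + data)
    finally:
        cap.release()
        sock.close()

if __name__ == "__main__":
    stream_camera()
```

A real deployment would more likely use a standard protocol such as WebRTC for latency and NAT traversal; the raw-socket version above is just meant to show the shape of the loop.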

Research Questions:
  1. Interface Design: how can we design an interface that enables effective communication between the sighted volunteer and the blind user?
  2. Computational: how do we effectively stream video and audio from the device?
  3. Computational: how do we provide low-latency automated feedback to the user as they approach the object (assuming the volunteer has identified the appropriate object)?
  4. Computational: how can we combine higher-latency feedback from the volunteer with low-latency automatic methods for object tracking (see the first sketch after this list)?
  5. Design: how can we evaluate and prototype this system in a way that is targeted towards the user population?
  6. Machine Learning: can we utilize the data gathered by this system to train an automatic computer vision system for object recognition (see the second sketch after this list)?
  7. Hardware Design: what is the best hardware platform for this assistive technology? Do we build our own or buy an existing one?
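
For questions 3 and 4, one plausible approach is to let the volunteer's occasional annotations seed a frame-rate automatic tracker. The sketch below assumes opencv-contrib-python for the CSRT tracker; get_volunteer_box() is a hypothetical stand-in for the channel carrying the volunteer's annotations, not part of any existing system.

```python
# Sketch: the volunteer's occasional bounding boxes (high latency) seed an
# automatic tracker that runs on every frame (low latency). Assumes
# opencv-contrib-python for cv2.TrackerCSRT_create; get_volunteer_box() is
# a hypothetical stand-in for the volunteer's annotation channel.
import cv2

def get_volunteer_box():
    """Hypothetical: return an (x, y, w, h) box when the volunteer marks
    the object, or None on frames with no new annotation."""
    return None

def guidance(frame, box) -> str:
    """Turn the tracked box into coarse directional feedback."""
    center_x = box[0] + box[2] / 2
    third = frame.shape[1] / 3
    if center_x < third:
        return "move left"
    if center_x > 2 * third:
        return "move right"
    return "straight ahead"

def run(cap: cv2.VideoCapture) -> None:
    tracker = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # High-latency path: re-seed the tracker whenever the volunteer
        # sends a fresh annotation.
        box = get_volunteer_box()
        if box is not None:
            tracker = cv2.TrackerCSRT_create()
            tracker.init(frame, box)
        # Low-latency path: follow the object on every frame in between.
        if tracker is not None:
            ok, box = tracker.update(frame)
            if ok:
                print(guidance(frame, box))  # would be spoken audio in practice

run(cv2.VideoCapture(0))
```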
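
For question 6, every object the volunteer identifies is also a labeled training example. Below is a minimal sketch of logging volunteer-confirmed crops for later model training; the directory layout and label source are assumptions, not an existing pipeline.

```python
# Sketch: save each volunteer-confirmed crop as a labeled training example.
# DATASET_DIR and the label source are assumptions, not an existing pipeline.
import os
import time

import cv2

DATASET_DIR = "grocery_dataset"  # hypothetical output directory

def log_example(frame, box, label: str) -> None:
    """File the crop the volunteer identified under its object label."""
    x, y, w, h = (int(v) for v in box)
    crop = frame[y:y + h, x:x + w]
    out_dir = os.path.join(DATASET_DIR, label)
    os.makedirs(out_dir, exist_ok=True)
    cv2.imwrite(os.path.join(out_dir, f"{time.time_ns()}.jpg"), crop)
```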