Eye-Helper Day 1 (06/02/2014)

Greetings from team eye-helper!

Before we tell you about what we did on our first day of summer research, here's a brief introduction to our project...

Our current goal is to help blind/visually impaired people shop without needing help from in-store personnel. We have been told that this is a problem area in the blind/visually impaired community; current solutions include barcode scanners and emailing a grocery list to the store beforehand. Part of our hope is that this technology could let blind users browse and shop at the grocery store at any time, without relying on employees or needing to plan ahead. However, we have heard a range of opinions from the people we have spoken to, and it remains unclear which features would be most appreciated. For example, should this technology focus on navigation and obstacle avoidance, revolve around identifying specific grocery items on the shelf in front of the user, or do something else entirely? We also don't know enough about the current technologies (and their pros/cons, as perceived by people who use them daily) to design good interfaces. We hope to talk to members of the blind/visually impaired community to both learn more about their daily lives and gain insight into how to make our technology as impactful as possible.

By the way, we're open source! Our code can be found at...

Today we started working on campus - we'll be continuing the work that Emily and Cypress started last semester (which was mostly focused on learning device communication between Android and Node platforms... yay, sockets!).

In regard to prototypes, we have an Android/Google Glass application that can capture images and stream them to a webapp (eye-helper.com). We can also communicate with the smartphone by typing into the chatbox on the webapp; this is done through sockets and Google's TextToSpeech API, which speaks the messages aloud on the phone. Screencaps of this can be seen below. At the moment only the device communication is set up - object tracking and crowdsourcing have yet to be implemented.
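For the technically curious, here's a rough sketch of what that chatbox-to-speech path might look like on the Android side. This is a simplified illustration rather than our exact code: the class name, socket event name, and URL are placeholders, and it assumes a Socket.IO-style client library for the socket connection.

import android.app.Activity;
import android.os.Bundle;
import android.speech.tts.TextToSpeech;

import io.socket.client.IO;
import io.socket.client.Socket;

import java.net.URISyntaxException;
import java.util.Locale;

public class ChatSpeechActivity extends Activity {

    private TextToSpeech tts;
    private Socket socket;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Set up Google's TextToSpeech engine; speech only works once init succeeds.
        tts = new TextToSpeech(this, status -> {
            if (status == TextToSpeech.SUCCESS) {
                tts.setLanguage(Locale.US);
            }
        });

        try {
            // Connect to the webapp's socket endpoint (URL and event name are illustrative).
            socket = IO.socket("http://eye-helper.com");
        } catch (URISyntaxException e) {
            throw new RuntimeException(e);
        }

        // When the webapp's chatbox sends a message, speak it aloud on the phone.
        socket.on("chat_message", args -> {
            String message = args[0].toString();
            tts.speak(message, TextToSpeech.QUEUE_ADD, null);
        });

        socket.connect();
    }

    @Override
    protected void onDestroy() {
        if (tts != null) tts.shutdown();
        if (socket != null) socket.disconnect();
        super.onDestroy();
    }
}

The nice thing about keeping a socket open is that the webapp and the phone can push messages to each other as they happen, without either side having to poll.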

One of our major tasks this week is to learn more about our lovely users! Our project is aimed at the blind/visually impaired, but this needs to be narrowed down a bit. For example, should we design for the tech-savvy blind community? Are there different subsets of our user group that we need to consider? We have a bunch of open-ended questions that we hope to discuss with our users through phone calls and in-person visits before making design decisions about our interfaces. We had our first phone call with a user today - we’ll talk about it in a few paragraphs.

As with any team project, we had to get ourselves organized. We've chosen to keep a rolling to-do list and assign people to tasks as we go, so everyone can be heavily involved with each "subsystem" of the project. (To clarify, the subsystems are the areas mentioned in the description above - in short, user research, the crowdsourcing interface, and computer vision.)

After having our first user conversation of the summer, we created a people portrait to visualize our notes (it includes quotes, general info, their first impressions and thoughts about eye-helper so far, and our observations/comments about the experience). We will probably create similar posters for the other users we meet in the near future. This conversation was an exciting update: we realized that we definitely need more context on the different subgroups within the blind/visually impaired community, and that each person has unique traits and lifestyle habits that may or may not make some of eye-helper's current features/concepts a moot point.

Looks like that’s it for today - stay tuned for daily updates from the research team!

--Emily