Developing for Google Glass 100

I keenly attended the entire Glass track at Google I/O on Thursday. It started with an overview session covering best practices and the overall project vision, began dipping into the Mirror API, and ended with a session on hacking (rooting) the Glass device and an open Q&A with the Glass team.

There are two methods for developing for Glass: integrating web services with the Mirror API, and developing native apps. Since the GDK (Glass Development Kit) is still being built by Google, the native app option isn't really supported yet. You have to switch your Glass to debug mode and sideload apps using the Android Debug Bridge (adb) to run native Android apps on it, and although that's possible, it's not something the average user will be doing, nor is the experience remotely optimized. For that reason, over the next few articles I will be focusing on the Mirror API.
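
For the curious, sideloading looks roughly like this. This is a minimal sketch using standard adb commands; the APK name is just a placeholder, and debug mode has to be enabled on the device first:

    adb devices                  # confirm Glass shows up over USB
    adb install MyGlassApp.apk   # push a native Android app onto the device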

Rather than jumping right into technical details, in this first article I am going to go over what it's like to use Glass and give an overview of how the software works.

The first thing that surprised me about Glass when I tried it was that I didn't have to adjust the focus of my vision to see the projection clearly. Whether I was looking at someone a few feet in front of me or something across the room, I never seemed to strain my eye to focus on the GUI. Secondly, just like a smartphone, it doesn't work great in bright light or noisy areas. Going in, I kind of expected the bone-conduction speakers to cut through the rest of the noise and offer crisp audio, but that wasn't really the case.

The entire Glass experience currently rests on 'cards'. Most of the interaction is done by touching the pad on the side of the device, between your right ear and forehead. You tap the touchpad to select an item, swipe forward or back to move left and right, swipe down to go back, and swipe down with two fingers to dismiss a notification. To keep things simple, all of the user interface is managed through these 'cards', each of which takes up the entire screen, and you simply flip between them.
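
Those cards map directly onto what the Mirror API calls timeline items. As a rough sketch (the field names below come from the Mirror API's timeline item format, but the values are invented for illustration), a single card is just a small bundle of data:

    # A minimal Mirror API timeline item, expressed as a Python dict.
    # Field names are from the Mirror API; the values are made up.
    card = {
        "text": "Hello from my Glass service",   # plain-text body of the card
        "notification": {"level": "DEFAULT"},    # chime/notify when it arrives
        "menuItems": [{"action": "DELETE"}],     # let the user dismiss the card
    }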

Searching Google is interesting. If you're asking a question like 'how far is it to Seattle' or 'how old is Morgan Freeman', it spits the answer back concisely, which is great. When you're searching for something less specific, for example 'Drupal Development Vancouver', it shows you one search result per card, starting from the top result. You cannot select a search result to actually go to that website, so you're left with only whatever blurb of content is in the result. If and when Glass becomes mainstream, I think really well-engineered SEO is going to be even more important than it is today.

For Glass to be a useful and enjoyable experience, the user interface has to be kept simple and quick. It's annoying having to flip through tons of cards to find what you're looking for, and it's not a device you would sit down with to watch a 10+ minute video. It is, however, awesome at receiving quick messages: things like tweets, SMS and news headlines. Of course, navigating with Glass is also extremely useful, and the camera component is awesome too.

The Mirror API runs through Google+ authentication. When you develop a Mirror API service for Glass, the user must first agree to let the service push cards to their device. A really great example of this is CNN's pilot Glass service. To sign up for it, you navigate to their page in a regular browser and simply click the "sign in with Google" button (OAuth). Then you select which categories of news you are interested in and what times of day you would like CNN to push you headlines. CNN talks to the Mirror API and pushes breaking headlines to your account, and Google transmits the messages to your device.
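
To make that concrete, here is a minimal sketch of what a service like CNN's might do on its server, assuming the Python Google API client libraries of the day (oauth2client and google-api-python-client); the client_secrets.json file, redirect URI, authorization code, and headline are all placeholders:

    import httplib2
    from apiclient.discovery import build
    from oauth2client.client import flow_from_clientsecrets

    # Step 1: send the user through the standard "sign in with Google" OAuth
    # flow, requesting permission to write to their Glass timeline.
    flow = flow_from_clientsecrets(
        'client_secrets.json',
        scope='https://www.googleapis.com/auth/glass.timeline',
        redirect_uri='https://example.com/oauth2callback')
    auth_url = flow.step1_get_authorize_url()  # redirect the user here

    # Step 2: Google redirects back with an authorization code, which we
    # exchange for credentials tied to that user's account.
    code = '<auth code from the OAuth callback>'  # placeholder
    credentials = flow.step2_exchange(code)

    # Step 3: push a card into the user's timeline. Google relays it to the
    # device; our server never talks to the hardware directly.
    service = build('mirror', 'v1', http=credentials.authorize(httplib2.Http()))
    service.timeline().insert(body={
        'text': 'Breaking: your headline here',
        'notification': {'level': 'DEFAULT'},
    }).execute()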

In future articles I will go over more specifics and technical details of what it's like to develop for Glass, and what currently is and isn't possible. Stay tuned!