The main question on our minds when Google Glass was introduced to the wearables market was: why would I wear such a device on my head, and what are the benefits of using it? Now, after more than a year and a half of Glass tests, we are more than ready to provide you with an answer.
Of course, many aspects haven't been fully tested because of Glass's limited availability. But thanks to the developers who have worked on Glass software over the past year, we can sketch what to expect from the device when it launches on the mass market, and we can present a few of the apps and functions that will make Glass the new "it" device of wearable computing.
Developers have been busy, and some of the concepts that have appeared are nothing short of stunning. While some focused on applications designed specifically for head-mounted devices, others put together software that helps the user stay focused on daily tasks by bringing useful existing software to Glass.
Take the Moment Camera, for instance, an app that uses Glass's 5 MP camera to take pictures at short intervals (x seconds) whenever it detects faces nearby. Taking advantage of Glass's awareness system, Satish Sampath and Kenny Stoltz built the Moment Camera to also use Glass's gyroscope, accelerometer and compass to estimate the perfect moment to take a shot. The photos are then uploaded to a remote server, where the best shots are kept. Stoltz noted that this type of system is impossible to achieve with a regular smartphone or tablet, and that it lets people automate some tasks and direct their attention elsewhere.
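To get a feel for how a sensor-driven "right moment" heuristic might work, here is a minimal sketch. It is not the Moment Camera's actual code; the class name, window size and motion threshold are our own illustrative assumptions. The idea is simply to trigger a capture when a face is visible and recent gyroscope readings indicate the wearer's head is steady.

```python
from collections import deque


class MomentDetector:
    """Hypothetical sketch of a capture heuristic: fire only when a face
    is detected and the head has been steady for a short window."""

    def __init__(self, window=5, motion_threshold=0.2):
        # Keep only the most recent gyroscope magnitudes (rad/s).
        self.readings = deque(maxlen=window)
        self.motion_threshold = motion_threshold

    def update(self, gyro_magnitude, face_detected):
        """Feed one sensor sample; return True when a shot should be taken."""
        self.readings.append(gyro_magnitude)
        if not face_detected or len(self.readings) < self.readings.maxlen:
            return False
        # "Steady" means every recent reading stays below the threshold.
        return max(self.readings) < self.motion_threshold
```

Fed a stream of sensor samples, the detector stays quiet while the head moves and fires once the readings settle with a face in frame. A real implementation would also weigh the accelerometer and compass, as the article describes, but the gating logic would look broadly like this.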
Another app, developed by Thad Starner, a Georgia Tech professor and technical lead on Google Glass, transcribes what a person is saying onto Glass's display, making it a real help for people with hearing impairments. In a recent interview with Wired, the professor said that with a head-mounted display, people with impaired hearing could watch their interlocutor's face while also watching the display, which shows a transcript of the conversation.
The apps above are newly developed software for Glass; now let's turn to apps that were first developed for smartphones but work better on Glass, mainly because of the hands-free system and the display, equivalent to a 25-inch HD screen viewed from eight feet away. Or at least, this is what the team at Quest Visual believes, since they are thinking of bringing their Word Lens app to Glass. Developed mainly for smartphone use, the app can translate signs you see in real time, without an Internet connection. Bryan Lin, who leads Android development at Quest Visual, said that given Glass's camera and display, Word Lens would be perfect for the device.
The Glass version of Word Lens was developed nearly two months ago and looks quite similar to its smartphone counterpart, with small differences in the user interface. While looking at a sign, you would say "Ok Glass, translate this" and the app would display the translated text of the sign.
Of course, developing Glass software isn't that different from Android development, since the device is Android-based. And even though access is limited due to privacy concerns, some developers have managed to work around the restrictions and build new software, such as facial recognition features. Stephen Balaban of Lambda Labs developed facial recognition tools that other developers can use to extend the capabilities of their own software. Google doesn't encourage the use of these features, and apps using them are barred from Glass's official app store.
One of Glass's main issues is fast battery drainage. Even though the battery should last 24 hours, watching or recording video can drain it in half a day or even faster, and apps that use these features intensively could empty the battery in one or two hours.
As Bryan Lin explained, Quest Visual tried to address the battery issue by running their translation algorithm only when the user zooms in, thus reducing CPU activity; implementing this took them over a month.
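The optimization Lin describes can be pictured as a simple gate: the expensive translation pass is skipped on every camera frame unless the user has signalled interest by zooming. The sketch below is our own illustration of that pattern, not Quest Visual's code; the class, method names and callback shape are assumptions.

```python
class GatedTranslator:
    """Illustrative sketch: run the costly translation pass only while
    the user is zooming, so the CPU idles the rest of the time."""

    def __init__(self, translate_fn):
        self.translate_fn = translate_fn  # expensive OCR + translation pass
        self.zooming = False
        self.calls = 0  # how many times the heavy pass actually ran

    def on_zoom_start(self):
        self.zooming = True

    def on_zoom_end(self):
        self.zooming = False

    def on_camera_frame(self, frame):
        # Outside a zoom gesture, drop the frame without any processing.
        if not self.zooming:
            return None
        self.calls += 1
        return self.translate_fn(frame)
```

Because the translation function never runs on dropped frames, battery cost scales with how often the user zooms rather than with the camera's frame rate, which is the kind of saving the article attributes to Quest Visual's approach.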