Imagine that in the near future you will be able to communicate with devices through a whole new method, one that doesn't require actually touching the device. Lately, developers have started directing their attention towards this concept, and the first results have already appeared.
David Way, from the MIT Media Lab, is one of the first developers to modify the Glass so that it responds to gestures as a second way of interacting. He thinks that the recent wave of wearable devices represents a new category of instruments, and that they should be controlled accordingly.
Hal Hodson from New Scientist is working with David Way on testing and improving the gestural interface of the Glass. David gathers data with the help of a depth camera strapped to Hal's wrist. The data is then used to build a personalized model of how he performs specific gestures, such as the open palm, fist, lifted finger or air wipe.
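Way's actual system is not public, but the idea of a personalized gesture model can be sketched in a few lines: store a per-user feature vector for each known gesture and match new readings against them. Everything below (the gesture names, the three-value feature vectors, the `classify` helper) is purely illustrative, not the real implementation.

```python
# Toy sketch of personalized gesture matching. The feature vectors
# stand in for per-user measurements derived from the depth camera;
# real systems would use far richer features and a trained model.
import math

# Hypothetical per-user templates: gesture name -> averaged feature vector.
USER_TEMPLATES = {
    "open_palm":     [0.9, 0.8, 0.1],
    "fist":          [0.2, 0.1, 0.9],
    "lifted_finger": [0.5, 0.9, 0.3],
    "air_wipe":      [0.7, 0.2, 0.6],
}

def classify(sample, templates=USER_TEMPLATES):
    """Return the template gesture closest to the observed sample
    (simple nearest-neighbour match in feature space)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda name: dist(sample, templates[name]))

print(classify([0.85, 0.75, 0.15]))  # closest to the open_palm template
```

Because the templates are built from one user's own recordings, the same matching code adapts to how that particular person performs each gesture.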
As Hal recently reported, David is working on an interface that communicates with the Glass through gestures. His main goal is to create a typing application, which could then be used to test which movements are easiest to perform, so that the comfortable ones can be kept and the awkward ones discarded. The first step towards that goal is a gestural interface that learns the user's preferences and adapts to them. However, there is still a long way to go before we see such an intuitive interface in action.
As we mentioned in the beginning, David isn't the only developer working on a gestural interface. Have you heard about the Heart of Glass? That is Google's homemade method of gestural input. But besides the search giant, a few other companies have had the same idea. One of them is 3dim, a new company with an alternative way of creating gestural input.
The prototype designed by the founders of 3dim, Ahmed Kirmani and Andrea Colaco, started out as a Google hack within MIT's labs. Their system uses infrared LEDs and photodiode sensors, and unlike Way's method of strapping a camera to the wrist, they have attached it to the brow of the Glass. When the user's hand moves, the sensors pick up the reflected light and translate it into a code.
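3dim has not published its signal chain, but the principle of turning reflected light into a gesture code can be illustrated with a minimal sketch: with two photodiodes side by side, a hand passing across the Glass peaks on one sensor before the other, which reveals the swipe direction. The traces and the `swipe_direction` helper below are assumptions for illustration only.

```python
# Hedged sketch: infer swipe direction from two photodiode intensity
# traces by checking which sensor sees its reflection peak first.
def swipe_direction(left_trace, right_trace):
    """Return the inferred direction of a hand pass; a left-to-right
    swipe peaks on the left sensor before the right one."""
    left_peak = left_trace.index(max(left_trace))
    right_peak = right_trace.index(max(right_trace))
    if left_peak < right_peak:
        return "left_to_right"
    if right_peak < left_peak:
        return "right_to_left"
    return "unknown"

# Simulated samples: the hand passes the left sensor first.
print(swipe_direction([0, 5, 9, 4, 1, 0], [0, 1, 3, 8, 9, 2]))
```

The appeal of this design is that photodiodes are far cheaper and less power-hungry than a depth camera, at the cost of much coarser information, which fits the trade-off the developers describe below.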
As the developers point out, the system can be integrated directly into the Glass without any extra device attached to the body, it consumes less power than a depth camera, and the overall product is a lot cheaper.
The downside of their system is that it isn't accurate enough to be used for typing. But as the researchers mentioned, it was created for a different purpose. It can detect broad gestures and hand motions, which makes it well suited to the Google navigation kit or to swiping through mails and notifications. The detected motions would then be linked to actions; for instance, drawing a letter could trigger an app beginning with that letter.
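The letter-to-app idea above amounts to a simple lookup once a letter has been recognized. Here is a minimal sketch of that mapping step; the app list and the `app_for_letter` helper are hypothetical, and the hard part (actually recognizing the drawn letter) is assumed to have happened already.

```python
# Illustrative sketch of "draw a letter, launch an app": map a
# recognized letter to the first installed app starting with it.
INSTALLED_APPS = ["Calendar", "Camera", "Gmail", "Maps", "Navigation"]

def app_for_letter(letter, apps=INSTALLED_APPS):
    """Return the first app whose name starts with the recognized
    letter, or None if no installed app matches."""
    letter = letter.upper()
    for app in apps:
        if app.upper().startswith(letter):
            return app
    return None

print(app_for_letter("c"))  # -> Calendar
print(app_for_letter("x"))  # -> None
```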
Shahzad Malik from Intel in Toronto says the concept is much broader and could be applied to gaming as well. He also believes that gesture controls feel as natural on the Glass as voice controls.
So the promises and expectations surrounding this device have gone through the roof, and frankly, people everywhere seem more and more eager to get their hands on the Glass. But until the device becomes available to everyone, we will make sure you stay posted on every new story.