Lance Nanek, the developer behind the My Monitor app we presented last month, is now working on controlling the Google Glass UI with head movements, and it works really nicely.
The developer is trying to overcome the issues caused by the small display and limited input options of wearable devices, using Glass' built-in sensors to control the UI with gentle head movements.
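To illustrate the idea, here is a minimal sketch of how a head-yaw angle could be mapped to card navigation with a dead zone, so small involuntary movements are ignored. The class name, thresholds, and actions are hypothetical, not taken from Nanek's code:

```java
// Hypothetical sketch: map a head-yaw angle (in degrees) to a card
// navigation action, ignoring small movements inside a dead zone.
public class HeadNav {
    enum Action { NONE, NEXT_CARD, PREV_CARD }

    static final double DEAD_ZONE_DEG = 10.0; // assumed tolerance for involuntary motion

    static Action actionForYaw(double yawDeg) {
        if (yawDeg > DEAD_ZONE_DEG)  return Action.NEXT_CARD; // head turned right
        if (yawDeg < -DEAD_ZONE_DEG) return Action.PREV_CARD; // head turned left
        return Action.NONE;                                   // inside dead zone
    }

    public static void main(String[] args) {
        System.out.println(actionForYaw(2.5));   // NONE
        System.out.println(actionForYaw(15.0));  // NEXT_CARD
        System.out.println(actionForYaw(-12.0)); // PREV_CARD
    }
}
```

On an actual device, the yaw angle would come from the orientation sensors rather than being passed in directly; the mapping logic stays the same.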
You can watch the video below to see how this works. It makes it easier for developers to create more advanced apps that users can navigate with little effort.
The example shown in the video uses smooth animations to improve transitions between views and to delay each transition. Without this animation, the views could bounce back and forth due to the sensors' high sensitivity.
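One common way to achieve this effect is to smooth the raw sensor readings with a low-pass filter and add hysteresis, so a transition fires only once and the view cannot bounce back immediately. The following sketch is illustrative, with assumed constants, and is not the app's actual code:

```java
// Illustrative sketch (not the app's actual code): an exponential
// low-pass filter smooths raw angle samples, and hysteresis uses a
// lower release threshold so the trigger does not flicker when the
// reading hovers near the trigger angle.
public class SmoothedTrigger {
    static final double ALPHA = 0.2;        // assumed smoothing factor (0..1)
    static final double TRIGGER_DEG = 12.0; // angle that starts a transition
    static final double RELEASE_DEG = 6.0;  // must drop below this to re-arm

    private double filtered = 0.0;
    private boolean armed = true;

    /** Feed one raw angle sample; returns true when a transition fires. */
    boolean onSample(double rawDeg) {
        filtered += ALPHA * (rawDeg - filtered); // low-pass filter
        if (armed && filtered > TRIGGER_DEG) {
            armed = false;                       // fire once, then wait for release
            return true;
        }
        if (!armed && filtered < RELEASE_DEG) {
            armed = true;                        // head back near center: re-arm
        }
        return false;
    }

    public static void main(String[] args) {
        SmoothedTrigger t = new SmoothedTrigger();
        // A single noisy spike (the lone 30) does not fire; only a
        // sustained head turn pushes the filtered value past the trigger.
        double[] samples = {0, 30, 0, 30, 30, 30, 30, 30, 30, 30, 2, 0, 0};
        int fired = 0;
        for (double s : samples) if (t.onSample(s)) fired++;
        System.out.println("transitions fired: " + fired); // prints 1
    }
}
```

The filter absorbs brief sensor spikes, and the gap between the trigger and release thresholds prevents the back-and-forth flicker described above; an animation delay on top of this gives the user time to cancel a transition.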
We expect this to be improved further, and new Google Glass apps should come with really neat features, considering how many open-source components dedicated to helping developers are already available. Thanks to people like him, we should soon have advanced applications that ease our lives and improve the way we see the world through Glass.
Fortunately, all his work is open source and available on GitHub, so any other developer can use it to create a better user experience in their own applications. The code used in this example, including the Sensor Fusion component, can be found in his GitHub repositories.