New Google Glass App Can Help You Discern Emotions
Google Glass might become a valuable tool for people who have trouble discerning emotions, thanks to a Glass app powered by Sension. Catalin Voss, the company's founder, believes the app he is developing will help many people read the emotions of those around them.
Demonstrating what the app is capable of was simple. A Google Glass wearer stared at an 18-year-old with a blank expression, and the display showed the word "neutral". Then, after a sudden smile, the word changed to "happy".
The person with many reasons to smile is Catalin Voss, an entrepreneur and Stanford student from Germany who has been working on iPhone apps since he was 12. He now leads a team of students building emotion-recognition tools, which could be used to improve education and training by monitoring student engagement.
More importantly, the Glass app might help people who need it most. Voss has a cousin with autism, and he believes the app could help him interact more naturally and express himself more fully.
Sension is one of several companies building a business around emotion-recognition technology. It is working out of Highland Capital's Menlo Park offices for the summer and is positioning itself as a go-to option for anyone who needs to analyze facial expressions. Its tools can also scan vocal patterns for signs of emotions such as happiness, sadness, frustration and even anger.
Some of the potential applications sound strange: if you feel depressed and your TV knows it, might it show you an ad for a fast-food product or restaurant?
But perhaps the broader goal is to build a better bridge between humans and machines so they can communicate. That ambition is not new; there is a long history of human-computer interaction, from Microsoft's Kinect controller to voice-recognition services such as Google Now and Siri. Emotion-recognition tools, however, aim to understand more than words and gestures, reaching into the realm of feelings.
Proponents say this could genuinely improve digital interactions. Susan Etlinger, an industry analyst with Altimeter Group, believes they could reach the point where people are not just listened to but actually heard.
Take automated customer service systems as an example. When your tone suggests your blood is beginning to boil, they could escalate the call to a human operator. Smartphone apps could likewise react differently when you scream at them rather than speak calmly, or when you jab at the screen instead of tapping it.
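As a rough illustration only (the emotion scores, threshold and function names below are invented, not any vendor's actual API), here is how such an escalation rule might look in Python:

```python
# Hypothetical sketch: route a call to a human operator once detected
# anger or frustration crosses a threshold. All names and values here
# are illustrative, not a real customer-service API.
from dataclasses import dataclass

@dataclass
class EmotionScores:
    anger: float        # 0.0 (calm) to 1.0 (furious)
    frustration: float

ESCALATION_THRESHOLD = 0.7  # arbitrary cut-off for this example

def handle_turn(scores: EmotionScores) -> str:
    """Decide how the system responds to the caller's latest utterance."""
    if scores.anger > ESCALATION_THRESHOLD or scores.frustration > ESCALATION_THRESHOLD:
        return "escalate_to_human"
    return "continue_automated_flow"

# A caller whose tone suggests rising anger gets routed to a person.
print(handle_turn(EmotionScores(anger=0.85, frustration=0.4)))  # escalate_to_human
```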
Responds to mood
Dan Emodi, a vice president at Beyond Verbal, an Israeli company whose technology detects emotional states of mind from the voice, envisions a Siri-like assistant that understands not just the user's words but also their feelings, and that comes back with an answer matched to their mood.
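To make the idea concrete, here is a minimal sketch of an assistant choosing its wording based on a detected mood. The mood labels and response templates are invented for illustration and are not how Beyond Verbal's products actually work:

```python
# Illustrative only: choose a reply style matched to the detected mood.
RESPONSES = {
    "frustrated": "Sorry this is taking a while. Let me get straight to it:",
    "cheerful": "Happy to help! Here is what I found:",
    "neutral": "Here is what I found:",
}

def respond(answer: str, detected_mood: str) -> str:
    """Prefix the answer with wording matched to the user's mood."""
    prefix = RESPONSES.get(detected_mood, RESPONSES["neutral"])
    return f"{prefix} {answer}"

print(respond("Your package arrives tomorrow.", "frustrated"))
```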
Tools like these add a new dimension to the relationship between humans and machines, and they could change how we think about it altogether. Beyond Verbal also wants to use the technology to train people to become better managers, interviewers, or even parents, by helping them understand their own emotional state and learn to adapt to it.
These tools rely on machine-learning algorithms: the software is trained on video or audio recordings of people expressing particular emotions, such as a smile, a cry, or another telltale facial expression. Facial-expression analysis is a well-established research area in academia. One common face-tracking approach, known as a "constrained local model", works by tracking a set of points on the subject's face and reading the expression from how they move. Sension's tool tracks 78 points on the face, including key areas such as the center of each pupil, the corners of the mouth, and the arch of the eyebrows.
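As a loose sketch of the idea, and not Sension's actual model, the snippet below assumes a hypothetical detector that already returns 78 (x, y) landmark points and derives a couple of simple geometric features from them; the point indices and the "happy"/"neutral" rule are placeholders for a trained classifier.

```python
# Minimal sketch of landmark-based expression reading. The 78-point layout,
# the indices and the threshold are hypothetical; a real system would feed
# such features to a trained classifier rather than a hand-written rule.
import numpy as np

NUM_POINTS = 78  # e.g. pupil centers, mouth corners, eyebrow arches

def extract_features(landmarks: np.ndarray) -> np.ndarray:
    """Turn raw (x, y) landmark coordinates into simple geometric features."""
    left_mouth, right_mouth = landmarks[20], landmarks[24]  # hypothetical indices
    left_eye, right_eye = landmarks[40], landmarks[47]
    left_brow, right_brow = landmarks[60], landmarks[66]

    eye_distance = np.linalg.norm(right_eye - left_eye)     # normalizes for scale
    mouth_width = np.linalg.norm(right_mouth - left_mouth)
    brow_height = (left_brow[1] + right_brow[1]) / 2.0

    return np.array([mouth_width / eye_distance, brow_height / eye_distance])

def classify_expression(landmarks: np.ndarray) -> str:
    """Crude rule-based stand-in for a trained model."""
    mouth_ratio, _ = extract_features(landmarks)
    return "happy" if mouth_ratio > 0.65 else "neutral"

# Stand-in for a real detector: random 78x2 landmark coordinates.
fake_landmarks = np.random.rand(NUM_POINTS, 2) * 100
print(classify_expression(fake_landmarks))
```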
Still, because the analysis tracks dozens of points in three dimensions, delivering results in real time is difficult, especially on small devices such as a smartphone or Google Glass.
Studies face
To get usable results more quickly, Sension targets a smaller set of specific points and focuses on how the shape of the face changes.
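One way to read that optimization (purely a sketch, with invented point indices) is to track only a handful of high-signal landmarks and skip the heavier analysis whenever those points have barely moved between frames:

```python
# Hypothetical sketch: keep only a few key landmarks out of the full 78 and
# re-run the expensive expression analysis only when they actually move.
import numpy as np

KEY_POINT_INDICES = [10, 20, 24, 60, 66]  # invented: e.g. mouth corners, brow arches

def reduce_landmarks(all_landmarks: np.ndarray) -> np.ndarray:
    """Select the small subset of points used for fast per-frame updates."""
    return all_landmarks[KEY_POINT_INDICES]

def face_changed(prev: np.ndarray, curr: np.ndarray, tol: float = 2.0) -> bool:
    """Return True only if any key point moved more than `tol` pixels."""
    return bool(np.max(np.linalg.norm(curr - prev, axis=1)) > tol)
```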
Is this the future of communication between humans and machines? Will machines become part of our emotional lives as well? It may seem a bit unsettling for now, but for people who stand to benefit, such as the cousin with autism mentioned above, this could be the start of an interesting new chapter.