Tone of voice is a valuable cue for determining mood, and it shapes both the content and the context of speech. A warm compliment turns into a scathing insult with the right shift in tone and inflection, and some people have a tough time telling one from the other. A new development from Beyond Verbal may bring that skill to wearable technology, allowing these increasingly prevalent devices to recognize changes in tone and, potentially, adjust their responses accordingly.
Beyond Verbal took its new system to TedMed Jerusalem, where it got some testing as part of a wellness tracking system. The testing reportedly went so well that the company recently announced the new Wellness API, geared specifically toward letting developers turn any smart device with a microphone into a responsive personal trainer. At last report, this includes devices like Fitbit and Jawbone trackers; with the Wellness API, such devices will be able to track not only workout activity and the like, but also the wearer's emotional state throughout the day.
Reports suggest that the Wellness API will allow the system to pick up on several separate subclasses of mood, ranging from the most obvious emotions to more subtle undertones. It will present the two most prominent emotions it finds in each range, and can even offer an "emotional map" of the seven leading emotions, complete with percentage listings based on how often each appeared.
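Beyond Verbal hasn't published the format of this "emotional map," but the idea of tallying per-utterance emotion readings into a ranked percentage breakdown can be sketched as follows. This is a minimal illustration, not the actual API: the function name, the emotion labels, and the sample data are all hypothetical.

```python
from collections import Counter

def emotional_map(detections, top_n=7):
    """Tally per-utterance emotion labels (hypothetical input format)
    and return the top_n emotions with the percentage of utterances
    in which each appeared, as described for the emotional map."""
    counts = Counter(detections)
    total = len(detections)
    return {emotion: round(100 * n / total, 1)
            for emotion, n in counts.most_common(top_n)}

# A hypothetical day of per-utterance readings from a voice analyzer.
day = ["calm", "calm", "stress", "joy", "calm", "stress", "joy", "anger"]
print(emotional_map(day))
```

With the sample data above, "calm" dominates at 37.5 percent of utterances, mirroring the kind of ranked percentage listing the reports describe.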
The Wellness API isn't Beyond Verbal's first foray into emotion analysis, either; the company already offers a free consumer app, Moodies, which helps users get a better handle on the emotional subtext running under spoken words. The Wellness API is said to be a refinement of that earlier offering, adding a few extra features for added value. There are other firms in this market as well: Affectiva, which recently raised a combined total of $21 million ahead of the release of its software development kit (SDK), and Emotient, which offered retailers an exciting new prospect in the form of analyzing shoppers' facial expressions for emotional cues that could prompt different responses from the store and, in turn, a better customer experience.
There seems to be quite a bit of market movement in this direction, and with good reason. The problem with emotional cues, whether read from facial expressions or tone of voice, is that they are easy to misread or otherwise misinterpret, more so for some people than for others. A tool that can more conclusively identify these somewhat subjective signals could be extremely helpful for everyone from retail store clerks to corporate sales reps: users would know, almost from the first few sentences, whether it was a good or bad time to talk, and could reframe sales pitches accordingly. Removing a hefty dollop of ambiguity from everyday interaction is something many would likely pay for.
Only time will tell just how this technology ends up being put to work, but the early going suggests this field has plenty of possibility. Being able to go beyond the verbal, as the company's name implies, could mean quite a bit of value for many users, and we could see more uses for this technology in the very near future.
Edited by Maurice Nagle