Apple did it again.
While the sexiest computer manufacturer did not invent voice control by any means, it did focus attention both on the usefulness and the business case for voice control in our modern world of the cloud and mobile data. Siri, for all its flaws, has kicked off another fundamental change in technology.
But that was in 2011, when Siri debuted on Apple's iOS and people like my dad first discovered that voice control could be a killer app, even if they didn't know the term "killer app." Since then, the technology industry has been abuzz with the promise of voice control.
“From phones, tablets and TVs to cars and, yes, kitchen appliances, voice-controlled computing is weaving its way into our lives,” noted John Paul Titlow in a recent piece for ReadWrite. “Only 15 months after Siri's arrival, voice-controlled computing is barreling ahead into the future.”
The Consumer Electronics Show (CES) earlier this month was chock-full of voice-controlled gadgets for every room in the house. One of the most significant voice-related announcements came from Nuance, the team behind Dragon NaturallySpeaking, which has recently been making big strides with better voice dictation and control.
Nuance revealed its Wintermute project this month at the show, which promises intelligent, cross-device voice commands that will be consistent and personalized across TVs, smartphones, tablets and other gadgets. A question such as "play that song I listened to this morning during my jog" could now yield a useful answer while relaxing in the living room with a protein shake.
Overall, the global voice recognition market is predicted to grow by more than 22 percent annually through 2016, according to TechNavio. What’s driving the energy in the voice control market is not just the elegant use case introduced by Apple; technological innovation has also been making recent strides.
The improved accuracy of voice recognition and smarter natural language understanding are making voice control more useful and no longer dependent on per-user training. What's more, mobile device ubiquity and machine-to-machine trends have meant that there are more devices that can accept speech commands and potentially do something with them.
Perhaps most significantly, the emergence of cloud computing has meant that voice recognition processing can be performed on servers instead of on the handheld devices themselves. Not only does this mean faster and better processing, but it means systems can learn from the overall body of users and, essentially, crowdsource the challenge of helping computers decipher what people are saying. Most leading tools, including Siri as well as those offered by Nuance, rely on access to the cloud for the magic of voice recognition consumers are now starting to experience.
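To make the architecture concrete, here is a minimal, hypothetical sketch of the pattern described above: the device acts as a thin client that only captures audio and ships it off, while the shared cloud engine does the recognition and learns from corrections contributed across the whole user base. All class and method names here are illustrative assumptions, not any real vendor's API, and "recognition" is faked with a simple lookup.

```python
class CloudRecognizer:
    """Stands in for the server-side engine shared by all users."""

    def __init__(self):
        # Aggregated corrections from the entire user base
        # (the "crowdsourcing" the article describes): maps
        # commonly misheard phrases to their intended form.
        self.corrections = {}

    def recognize(self, audio_blob):
        # A real engine would run acoustic and language models here;
        # we fake audio-to-text with a simple decode for illustration.
        text = audio_blob.decode("utf-8")
        return self.corrections.get(text, text)

    def learn_correction(self, heard, meant):
        # One user's fix improves recognition for everyone.
        self.corrections[heard] = meant


class Device:
    """A thin client: captures audio, defers processing to the cloud."""

    def __init__(self, cloud):
        self.cloud = cloud

    def voice_command(self, audio_blob):
        return self.cloud.recognize(audio_blob)


cloud = CloudRecognizer()
phone, tablet = Device(cloud), Device(cloud)

# One user corrects a mis-recognition on their phone...
cloud.learn_correction("play rejoice", "play that song")

# ...and every device sharing the same cloud engine benefits.
print(phone.voice_command(b"play rejoice"))  # -> play that song
print(tablet.voice_command(b"hello"))        # -> hello
```

The design point is that the expensive, data-hungry part (the recognizer and its accumulated corrections) lives in one shared place, so every new device is cheap to add and immediately benefits from what the system has already learned.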
“The improved accuracy of voice recognition and smarter natural language understanding will combine with our devices' increased awareness and ability to better discriminate between sounds to create a far more capable, intelligent system for communicating with machines,” Vlad Sejnoha, chief technology officer at Nuance, recently summarized.
So in short, thanks Apple.
Want to learn more about the latest in communications and technology? Then be sure to attend ITEXPO Miami 2013, happening now in Miami, Florida. Stay in touch with everything happening at ITEXPO. Follow us on Twitter.
Edited by Allison Boccamazzo