IVRs without Frustration: Speech Recognition Gets the Human Touch
October 19, 2012
By Tracey E. Schelmetic, TMCnet Contributor
If you’ve ever used an interactive voice response (IVR) system – and unless you’ve been living on a desert island, you have – you’ll know that there are some truly bad systems out there. The earliest voice-driven IVRs that used speech recognition often required users to shout into the telephone, repeating themselves until they nearly spontaneously combusted in frustration when the system returned the message, “I didn’t understand your response” for the sixth time.
Luckily, the technology has come a long way.
Once upon a time, the number of response combinations that the systems could understand was very limited, which is why they had to restrict your responses (“say ‘one’ for the customer service department” instead of simply asking you to describe what you’re looking for in natural language). Those days are nearly behind us, thanks to newer solutions offered by companies such as Massachusetts-based Interactions, which offers a conversational natural language solution that allows people to speak to computers as if they were live agents.
Interactions’ solution leverages a combination of automated speech recognition (ASR) and what the company calls “human assisted understanding” (HAU). HAU improves accuracy and natural-language understanding by stepping in where speech recognition alone falls short. In traditional speech-recognition applications, all requests get routed directly to an ASR engine. When the engine can’t recognize something, it keeps re-prompting the caller, or eventually gives up and transfers the entire call to a live agent. This limitation causes poor application design and performance – and frustrates callers. Interactions says it has overcome this application-design limitation.
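The article doesn’t describe Interactions’ internals, but the general pattern it outlines – send every utterance to the ASR engine first, and escalate only low-confidence fragments to a human rather than re-prompting or transferring the whole call – can be sketched roughly as follows. All function names and the confidence threshold here are illustrative assumptions, not Interactions’ actual API:

```python
def recognize(utterance, asr_engine, human_agent, threshold=0.8):
    """Return (transcript, source) for a caller utterance.

    A hypothetical sketch of the human-assisted-understanding pattern:
    the ASR engine handles the utterance when it is confident, and a
    human agent interprets just this fragment when it is not.
    """
    # Every request goes to the ASR engine first.
    text, confidence = asr_engine(utterance)
    if confidence >= threshold:
        return text, "asr"
    # Instead of re-prompting the caller or transferring the call,
    # hand only this unrecognized fragment to a person, who selects
    # the appropriate interpretation.
    return human_agent(utterance), "human"
```

The key design point is that the caller never hears “I didn’t understand your response”: the fallback happens behind the scenes, per utterance, while the automated dialogue continues.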
Rachel Metz of MIT Technology Review says it’s about more than simply routing calls with less frustration.
“Interactions’ software is, hopefully, more than a solution to impossibly annoying automated support systems,” writes Metz. “It’s also an example of software and human intelligence working together. Rather than relying entirely on software to handle calls, Interactions automatically hands speech that its software can’t cope with over to human agents, who select an appropriate response.”
Who would have thought that humans interacting vocally with computers could be a source of anything but a nervous breakdown?
Edited by Rachel Ramsey