I don’t know about you, but I’ve developed a love-hate relationship with voice-texting. On one hand, it’s so much easier than typing out my messages, especially when I’m on screen overload. On the other hand—my phone seems to be about 70 percent accurate in recognizing my words, which means that I end up going back in to edit my texts 30 percent of the time. So why do I keep trying? Because the system holds so much promise. Even the possibility that voice-texting will save me screen time keeps me coming back over and over again.
Turns out I’m not alone. For the last two years, industry leaders worldwide have named conversational systems a trend to watch, a sign that for many of us, texting and typing no longer cut it when it comes to interacting with technology. Siri has become my go-to search engine. My kids talk to Alexa as much as they talk to each other. We crave the feeling of a real conversation just as much as our hands and eyes crave a break from constant typing. And the industry is finally catching up with our demands.
With artificial intelligence (AI) and machine learning reaching a “critical tipping point” and the IoT continuing to expand, it makes sense that developers are focusing on ways to strengthen the “digital mesh” that holds these technologies together. As we look forward to a world of drones, robots, and self-driving cars, we also need to look toward innovative ways for these things to converse with us, and with each other. That means not just receiving one-way directives, but understanding two-way interactions that proactively meet our needs. That’s where conversational technology comes in. Over the past few years it has grown steadily stronger, and the following are just a few reasons it will likely continue to grow.
When we talk about conversational systems, we aren’t just talking about a computer’s ability to understand the human voice. We’re also talking about system-to-system communication, which is what makes the IoT tick. As the IoT continues to grow, these conversations become a greater necessity. What’s more, these systems won’t be relying on voice and text alone. They’ll be using sight, sound, and feeling to process and “understand” these interactions, further blurring the lines between the digital sphere and the reality in which we are living.
As AI-powered virtual assistants edge closer to everyday reality, it’s clear that accurate communication must be the foundation of an effective assistant-owner relationship. That can’t happen without strong conversational systems. Companies like Baidu, China’s largest search engine, have made huge strides in accuracy: its Deep Speech 2 technology can sometimes transcribe Mandarin more accurately than a person can. Even more impressive, it uses a universal speech model that learns English as easily as it does Mandarin, meaning the technology could scale across languages. In fact, Andrew Ng, the company’s chief scientist and associate professor at Stanford University, has said, “I hope to someday have grandchildren who are mystified at how, back in 2016, if you were to say ‘Hi’ to your microwave oven, it would rudely sit there and ignore you.” It seems almost everything will be an assistant soon enough.
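To make that kind of speech-to-text step concrete, here is a minimal sketch. It does not use Deep Speech 2 (which is not a public API); instead it leans on the open-source Python SpeechRecognition package as a stand-in, and the file name and language codes are placeholders for illustration.

```python
# Illustrative only: transcribe a short audio clip with the open-source
# SpeechRecognition package (pip install SpeechRecognition), not Deep Speech 2.
import speech_recognition as sr

recognizer = sr.Recognizer()

# "hello.wav" is a placeholder file name for this sketch.
with sr.AudioFile("hello.wav") as source:
    audio = recognizer.record(source)  # read the whole clip into memory

try:
    # The same call can target other languages, e.g. language="zh-CN" for Mandarin.
    text = recognizer.recognize_google(audio, language="en-US")
    print("Transcript:", text)
except sr.UnknownValueError:
    print("Could not understand the audio")
except sr.RequestError as err:
    print("Recognition service error:", err)
```

The notable point is the last comment: a single recognizer call, parameterized by language, is exactly the kind of “universal” interface the Baidu work points toward.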
Chances are good you’ve chatted with a robot recently without even realizing it. More and more, companies are using automated chatbots to help customers clear up issues quickly and easily. The bots save time and money over traditional customer service centers, which require large staffs and often leave customers frustrated by long hold times. As chatbots grow stronger in emotional intelligence and empathy, they may even earn higher customer satisfaction scores than their human counterparts. In fact, Gartner estimates AI will account for 85 percent of customer relationships by 2020. A sketch of how the simplest of these bots route a request follows below.
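Here is a purely illustrative, keyword-based sketch in Python of that routing step; real customer-service chatbots rely on machine-learned intent and sentiment models, and every keyword and canned answer below is hypothetical.

```python
# Toy, rule-based support bot for illustration only.
# All intents and responses here are made up for the example.
RESPONSES = {
    "refund": "I can start a refund for you. Could you share your order number?",
    "password": "Let's reset your password. I've sent a secure link to your email.",
    "hours": "Our support team is available 24/7 through this chat.",
}

def reply(message: str) -> str:
    """Return a canned answer for the first matching keyword, else escalate."""
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    # Fall back to a human agent when no intent matches.
    return "I'm connecting you with a human agent who can help."

if __name__ == "__main__":
    print(reply("I forgot my password again"))
    print(reply("My order arrived broken, I want a refund"))
```

Even this toy version shows the core appeal: routine requests get answered instantly, and anything the bot can’t match is handed to a person rather than left on hold.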
Granted, conversational systems are still being perfected, but huge strides are being made in understanding more complex sentences and requests. Microsoft is working on a natural user interface (NUI) that combines natural language with gestures, touch, and gaze to deepen those conversations. As one writer noted, we can begin to imagine a search engine that requires no screen or search box to find an answer; everything can be searched by sight, touch, or sound. That’s the kind of “conversation” I’m excited about.
Additional Resources on This Topic:
17 Tech Trends to Watch for in 2017
Time for Chatbots to Get Smart
When AI Chatbots Attack: What You Need to Know About Programming Empathy
Photo Credit: martinlouis2212 Flickr via Compfight cc
This post was first published on Futurum.xyz