I don’t know about you, but I’ve developed a love-hate relationship with voice-texting. On one hand, it’s so much easier than typing out my messages, especially when I’m on screen overload. On the other hand—my phone seems to be about 70 percent accurate in recognizing my words, which means that I end up going back in to edit my texts 30 percent of the time. So why do I keep trying? Because the system holds so much promise. Even the possibility that voice-texting will save me screen time keeps me coming back over and over again.
Turns out I’m not alone. For the last two years, conversational systems have been marked as a trend to watch by industry leaders worldwide, indicating that for many of us, texting and typing no longer cut it when it comes to technological interaction. Siri has become my go-to search engine. My kids talk to Alexa as much as they talk to each other. We crave that feeling of having a real conversation just as much as our hands and eyes crave a break from the constant typing. And the industry is finally catching up with our demands.
With artificial intelligence (AI) and machine learning reaching a “critical tipping point” and the IoT continuing to expand, it makes sense that developers are focusing on ways to strengthen the “digital mesh” that holds these technologies together. As we look forward to a world of drones, robots, and self-driving cars, we also need to look toward innovative ways for these things to converse with us—and each other. That means not just receiving one-way directives, but engaging in two-way interactions that proactively meet our needs. That’s where conversational technology comes in. In the past few years, it has grown increasingly strong, and the following are just a few reasons it will likely continue to grow.
Conversations—They’re Not Just for Humans Anymore
When we talk about conversational systems, we aren’t just talking about a computer’s ability to understand the human voice. We’re also talking about system-to-system communication, which is what makes the IoT tick. As the IoT continues to grow, these conversations become a greater necessity. What’s more, these systems won’t be relying on voice and text alone. They’ll be using sight, sound, and feeling to process and “understand” these interactions, further blurring the lines between the digital sphere and the reality in which we are living.
Virtual Assistants Need Better Ways to Communicate
As AI-powered virtual assistants near reality, it’s clear that accurate communication must be part of an effective assistant/owner relationship. That can’t happen without the development of strong conversational systems. Companies like Baidu, China’s largest search engine, have made huge strides in the accuracy of conversational systems. Its Deep Speech 2 technology can sometimes transcribe Mandarin more accurately than a person can. Even more amazing—it uses a universal speech system that can learn English as easily as it does Mandarin, meaning the technology could easily become universally available. In fact, Andrew Ng, the company’s chief scientist and associate professor at Stanford University, has said, “I hope to someday have grandchildren who are mystified at how, back in 2016, if you were to say ‘Hi’ to your microwave oven, it would rudely sit there and ignore you.” It seems almost everything will be an assistant soon enough.
Faster, Better, Cheaper Customer Service
Chances are good you’ve chatted with a robot recently without even realizing it. More and more, companies are using automated chatbots to help customers clear up issues quickly and easily. The bots save time and money over traditional customer service centers, which require large human staffs and often leave customers frustrated by long hold times. As chatbots grow stronger in emotional intelligence and empathy, they may also earn even higher customer satisfaction scores than their human counterparts. In fact, Gartner estimates AI will account for 85 percent of customer relationships by 2020.
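At their simplest, these bots map an incoming message to an “intent” and return a canned reply, escalating to a human when nothing matches; production systems replace the keyword rules below with learned classifiers. Here is a minimal sketch of that pattern—the intents, keywords, and replies are all hypothetical examples, not any vendor’s API:

```python
# Toy intent-matching chatbot: keyword rules stand in for the
# machine-learned intent classifiers real customer-service bots use.
# Intents, keywords, and replies are hypothetical examples.

INTENTS = {
    "billing": (
        ["bill", "charge", "invoice", "refund"],
        "I can help with billing. Could you share your account number?",
    ),
    "shipping": (
        ["ship", "deliver", "track", "package"],
        "Let me look up your order. What's your tracking number?",
    ),
}

FALLBACK = "I'm not sure I understand. Let me connect you to a human agent."

def reply(message: str) -> str:
    """Return the canned response for the first intent whose keywords match,
    or the fallback (human hand-off) when no intent matches."""
    text = message.lower()
    for keywords, response in INTENTS.values():
        if any(word in text for word in keywords):
            return response
    return FALLBACK
```

A message like “Where is my package?” matches the shipping intent, while gibberish falls through to the human hand-off—the escalation path that keeps a rule-based bot from frustrating customers the way long hold queues do.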
Granted, conversational systems are still being perfected, but huge strides are being made in understanding more complex sentences and requests. Microsoft is working on a natural user interface (NUI) that combines natural language with gestures, touch, and gaze to deepen these system conversations. As one writer noted, we can begin to imagine a search engine that does not require a screen or search box to find an answer—everything can be searched by sight, touch, or sound. That’s the kind of “conversation” I’m excited about.
Additional Resources on This Topic:
17 Tech Trends to Watch for in 2017
Time for Chatbots to Get Smart
When AI Chatbots Attack: What You Need to Know About Programming Empathy
This post was first published on Futurum.xyz
Daniel Newman is the Principal Analyst of Futurum Research and the CEO of Broadsuite Media Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring digital transformation and how it is influencing the enterprise. From big data to IoT to cloud computing, Newman makes the connections between business, people, and tech that companies need to benefit most from their technology projects, and his ideas are regularly cited in CIO.com, CIO Review, and hundreds of other sites across the world. A five-time best-selling author whose most recent book is “Building Dragons: Digital Transformation in the Experience Economy,” Daniel is also a Forbes, Entrepreneur, and Huffington Post contributor. An MBA and graduate adjunct professor, Daniel Newman is a Chicago native whose speaking takes him around the world each year as he shares his vision of the role technology will play in our future.