Intelligent machines are one of the most controversial types of technology in development today. And I hate to break it to those of you who are concerned about the use of AI-powered robots, but intelligent machines aren’t going anywhere. In fact, they’re only getting smarter.
Recently, leaders like IBM and Alphabet have been pushing AI to its outer limits, teaching it to not only master processes or answer simple questions, but to actively engage in arguments and develop its own “imagination.” In essence, technologists are trying to teach intelligent machines to do the things that—in the past—techies have assured us only humans can do.
And right now, it seems they’re being trained to do just about everything.
Arguments for Intelligent Machines
If you ask IBM, I’m sure they would say their Project Debater technology could be a huge benefit to the human population, especially in the business world. In a recent live debate, an AI machine was able to argue in favor of space exploration being publicly subsidized. Not only that, the machine was able to make an intelligent rebuttal to its opponent’s argument and craft a closing statement on its own. Imagine being able to use AI to develop arguments for or against a certain business deal, new hire, or product launch. Brilliant!
The thing that concerns me about Project Debater isn’t so much its ability to make an argument. (As of now, the AI simply crafts a statement based on information it finds on Wikipedia, hardly a threat to world safety.) The more concerning point for me is that the AI has no idea what it’s talking about. As such, it has no ability to make a value judgment, something that, at this point, only humans can and should do. Without that ability, the technology could be used to cause turmoil online, be it through cyber-bullying or security threats or simply leading businesses down a value-less rabbit hole of “reasoning.”
Intelligent Machines: Thinking Like Humans?
If you are worried about intelligent machines’ lack of value systems, fear not: researchers are also working on building other human qualities into AI moving forward. Some, for instance, believe they can teach robots “common sense,” working to find ways to help AI learn the way a human baby learns, with instincts that guide its learning. They feel confident robots will be able to learn the concepts of causality (if I do this, that will happen); cost/benefit (a value judgment of which option has more positives than negatives); and even types vs. instances (the ability to recognize that something is “mine,” rather than something that belongs to everyone). The value in business would be, well, transformational. Rather than simply speeding up processes or helping companies automate, these machines could actually lead companies in new directions, perhaps better than their own leaders do.
Scared yet? You probably should be.
Arguments Against Intelligent Machines
I can understand researchers’ desire to code “instincts” into AI. After all, the world is a chaotic place. For bots to be remotely useful, they’ll need to be able to adapt in the moment just like a human would. They need to be able to make reasonable judgment calls—judgment calls that we as humans can trust to be in our best interest, or the interest of those we love.
Alphabet’s DeepMind, for instance, is developing technology that can look at just a few angles of an object—or eventually a situation—and be able to “imagine” what the other angles look like. In essence, it helps bots “fill in the blanks” to get a better idea of what they’re facing in a particular environment or situation. Still, is that really enough to guarantee that intelligent machines will continue to work for the benefit of humanity—even if and when they eventually outsmart us?
Many, including tech genius Elon Musk, say “no.” In fact, Musk has actually been one of the most outspoken opponents of AI-powered robots, arguing that they are a “fundamental risk to the existence of civilization.” Why? Because none of us really know how AI works—and if we don’t know how it works, we’ll have our hands tied if we ever need to stop it.
If I were to make a closing argument to Project Debater, it would be this: intelligent machines have a lot of power. But they should never have more than humans do. All of us—technologists and business people alike—would do well to remember this.
The original version of this article was first published on Futurum.
Daniel Newman is the Principal Analyst of Futurum Research and the CEO of Broadsuite Media Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring digital transformation and how it is influencing the enterprise. From big data to IoT to cloud computing, Newman makes the connections between business, people, and tech that companies need in order to benefit most from their technology projects, which leads to his ideas regularly being cited in CIO.com, CIO Review, and hundreds of other sites across the world. A five-time best-selling author, most recently of “Building Dragons: Digital Transformation in the Experience Economy,” Daniel is also a Forbes, Entrepreneur, and Huffington Post contributor. An MBA holder and graduate adjunct professor, Daniel Newman is a Chicago native, and his speaking takes him around the world each year as he shares his vision of the role technology will play in our future.