The True Problem Artificial Intelligence Will Face

In Technology by Daniel Newman


Thanks to Hollywood movies and news stories about AI taking over our jobs, artificial intelligence is fighting an uphill battle with public relations. For every article about how transformative AI can and will be in our lives, it seems there's another describing why AI is a dangerous threat. Brilliant minds like Stephen Hawking and Elon Musk have expressed concerns about AI's unleashed power and how it could ultimately harm human life. And just this month, MIT Technology Review published a troubling article pointing out that we, meaning the highly intelligent, trained developers driving AI, don't actually know how the deep learning algorithms supporting AI work. That admission proves the concerns surrounding AI aren't just based on Hollywood movies; they're grounded in real life.

The MIT article highlighted a self-driving car powered by deep learning, a powerful technology that some hope will someday cure disease, make high-value financial trading decisions, and improve our quality of life. The problem the article presents, however, is that if developers do not yet fully understand the technology they're working with, it's hard for them to anticipate or be accountable for its failures. And with AI, the failures could be big and costly, both in money and in human life.

The self-driving car, and the admission that humans do not fully understand it, underscores the fact that AI, though incredibly powerful, is still relatively new and misunderstood. It seems businesses in the AI field have much more than complex technology to manage; they also have a potential PR nightmare on their hands. The following are just a few reasons why AI is not being fully embraced by the broader tech community, or by humanity at large.

It’s Getting a Bad Rap

The pace at which AI has developed concerns even some of the top names in technology. Folks like Elon Musk, Bill Gates, and Stephen Hawking have already expressed concerns about AI's rapid development. In 2015, Musk even donated $10 million to the Future of Life Institute to help make sure AI doesn't turn evil. Despite many promising developments, including the completely nonthreatening use of AI to generate recommended viewing, reading, and purchases on Amazon, Google, and Netflix, many remain concerned about what AI has in store for the human race.

It’s Learning to Make Decisions—for Better or Worse 

While some are excited about AI's potential to help diagnose disease or speed up parole decisions and investment selections, others caution that handing decision-making over to robots, including, in the military's case, the decision to take a life, is irresponsible at best. According to one BBC article, Danah Boyd, a principal researcher at Microsoft Research, noted there remain concerns about "the values being written into such systems—and who is ultimately responsible for them." When decisions are handed over to machines, accountability becomes even fuzzier.

There Is Still No Universal Ethics Panel or Standard Governing AI's Growth

Although certain organizations like Google have their own ethics boards governing the development of AI technology within their own realm, the fact remains that there is still no universal body ensuring the ethical advancement of AI. For many, this remains a huge concern, as there is no telling what type of AI robot could be developed if the technology gets into the wrong hands. The result could be a loss of jobs, a loss of relationships, or, as Hawking tends to believe, an end to human society itself.

Wherever you stand on the AI issue, the fact remains that the technology isn't going anywhere but up, albeit a bit more slowly than many believe. I'd encourage those troubled by the development of AI to consider that AI's progress thus far has been fractured at best. The striking technologies we see in the media are developed independently, and none have yet been shown to work together to create the superhuman force we fear. That isn't to say the opportunity isn't there; we simply aren't there yet. There is still time to put in place the checks and balances that would allow AI to earn a positive public embrace, provided an ethics committee is formed and we're able to show people the big picture of how this amazing technology can, in fact, continue to help, not hurt, the human race.

Additional Resources on This Topic:
The Ethical Side of Artificial Intelligence
Artificial Intelligence and Automation: Predictions for the Future
AI is Going to Become Your New BFF

Photo Credit: vic8761 Flickr via Compfight cc

Daniel Newman is the Principal Analyst of Futurum Research and the CEO of Broadsuite Media Group. Living his life at the intersection of people and technology, Daniel works with the world's largest technology brands exploring digital transformation and how it is influencing the enterprise. From big data to IoT to cloud computing, Newman makes the connections between business, people, and tech that are required for companies to benefit most from their technology projects, which leads to his ideas regularly being cited in CIO.com, CIO Review, and hundreds of other sites across the world. A five-time best-selling author, most recently of "Building Dragons: Digital Transformation in the Experience Economy," Daniel is also a Forbes, Entrepreneur, and Huffington Post contributor. An MBA and graduate adjunct professor, Daniel Newman is a Chicago native, and his speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
