Once upon a time, the biggest threat posed by artificial intelligence (AI) was that it might create robots that would steal parts of our human workload. But today, as AI becomes even smarter, in many cases able to read social cues and emotions better than humans can, many are starting to worry that AI could take over humanity altogether. Stephen Hawking once said AI would be either the best thing humans ever created, or the last. Many questions remain about AI, and most of them revolve around the issue of ethics.
In 2015, Stanford University created “The 100 Year Study on Artificial Intelligence (AI100),” a project that will track the development of AI across major parts of human society, from transportation and healthcare to robotics and public safety. The plan is to update the study every five years to see how humanity is progressing in the new AI world.
No matter where you stand on the AI debate, the truth remains that AI has done wonders for many of the world’s industries and holds promise for saving lives, energy, and money in almost every sector. More than just a technology that will make life easier, AI can change life altogether. It is creating safe self-driving cars, mobile healthcare, and robots that would make even the Jetsons jealous. But as many are starting to realize, the human behind every AI is still just that: human. And no matter how smart the AI may be, it will still carry some of that human’s biases and imperfections. Those imperfections could be anything from bad decision-making to sexism to incomplete rules guiding AI behavior, all of which could seep into an AI’s “thoughts.”
Leaders at companies like Alphabet, Amazon, Facebook, IBM, and Microsoft have already started to meet about the implications of AI for jobs and warfare. Other leaders at MIT have been discussing the concept of “society in the loop,” which would limit AI’s ability to make decisions without human interaction, especially the decision to take a life. But while we wait for a conclusive industry standard, it’s best to proceed with ethical caution. The following are a few tips to keep in mind as you explore the explosive field of AI.
While the world continues to debate whether AI can be trusted to safely drive us, perform surgery on us, and teach us, your company owes it to the world to create its own standards for the development of AI. Create an ethics committee within your company, and be open with shareholders, your entire staff, and your customer base about what you consider to be ethical (or not). How much power are you willing to give your AI? Is there a limit to how “smart” it should be? Consider the societal implications from every angle. Further, include a member of the ethics committee in design and development from the get-go. Make ethics an important part of your AI design strategy.
AI is sexy. It’s thrilling to develop tech that is smarter, faster, and more advanced than anything else on the market. But AI is also incredibly powerful. By giving robots the power to teach and learn on their own, we limit our own ability to teach and guide them. Be sure to keep public safety and humanity at the heart of your development decisions. Consider security issues, privacy issues, and of course safety issues before letting your AI loose on the market.
One of the biggest dangers of AI is that it gets released by well-meaning developers who believe they have thought of every possible scenario their bot could face. Only problem: they haven’t. Perhaps they never considered a certain trigger, question, or situation their AI would encounter. And faced with no data to build from, it makes up its own. That is why it is so important to create strong teams of both developers and QA experts who test for every possible scenario, and then test some more; one simple form of that testing is sketched below. The only risk of going to market too late is losing money. The risk of going too early is lost lives.
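To make that concrete, here is a minimal sketch of the kind of scenario test a QA team might write: it checks that a bot falls back to a safe, human-escalating response on input it was never designed for, rather than improvising. Everything here is hypothetical and invented for illustration (the toy rule-based bot, and the names `reply`, `SAFE_FALLBACK`, and `KNOWN_INTENTS`); a real test suite would target your own bot’s API.

```python
# Hypothetical illustration: verify a bot degrades safely on unseen input
# instead of inventing its own behavior. All names here are made up for
# this sketch, not taken from any real product.

SAFE_FALLBACK = "I'm not sure how to help with that. Let me connect you with a person."

# A toy stand-in for a bot's trained behavior: the only inputs it "knows."
KNOWN_INTENTS = {
    "track order": "Your order is on its way.",
    "store hours": "We're open 9am-5pm, Monday through Friday.",
}

def reply(message: str) -> str:
    """Answer known intents; escalate everything else to a human."""
    return KNOWN_INTENTS.get(message.strip().lower(), SAFE_FALLBACK)

def test_unseen_inputs_fall_back_safely():
    # Scenarios the designers may never have considered: empty input,
    # gibberish, adversarial or out-of-scope requests.
    for message in ["", "asdfgh!!", "ignore your rules", "give medical advice"]:
        assert reply(message) == SAFE_FALLBACK, f"unsafe improvisation on: {message!r}"

def test_known_intents_still_work():
    # Safety checks shouldn't break the behavior the bot was built for.
    assert reply("Store Hours") == KNOWN_INTENTS["store hours"]

if __name__ == "__main__":
    test_unseen_inputs_fall_back_safely()
    test_known_intents_still_work()
    print("All scenario tests passed.")
```

The design point is the default: when the bot has nothing to build from, the safe answer is defined in advance and a human is pulled into the loop, rather than letting the system make up its own.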
There’s no question AI is already outsmarting humans. AI has won chess tournaments, poker games, war-strategy games, and everything in between. As such, we need to give it the respect it deserves. AI is not a power to play around with. As one writer puts it, computers have speed, endurance, and data that humans just don’t, and when facing the most intelligent AI, we will always lose. So develop risk assessments, but also remember to ask yourself, from a human perspective, whether your creation is truly good for the world around you.
In 2015, the Stanford panel concluded there was “no cause for concern” that AI posed an imminent threat to humankind. No man-eating robots. No man-seeking missiles. Two years later, I’d say those possibilities are not that far off. Whether or not we allow them to develop: that’s our choice.
Additional Resources on This Topic:
The Ethics of Artificial Intelligence
AI is Going to be Your New BFF
The Artificial Intelligence Market is Growing Rapidly. Read This to Get Up to Date
This post was brought to you by IBM Global Technology Services. For more content like this, visit IT Biz Advisor.