
OECD Released New AI Principles: How Will They Impact the Ethics of AI?

In AI by Daniel Newman


Do we need global standards for the humane and ethical development of artificial intelligence (AI)? Members of the Organisation for Economic Co-operation and Development (OECD) think so. The group, made up of 36 member nations, recently adopted a set of AI principles designed to guide multinational cooperation on the responsible stewardship of AI in the coming years. While the guidelines are not legally binding, they make a major statement about technology and the growing global awareness that AI is far too powerful to leave unregulated.

The following are the five main OECD AI principles agreed upon by OECD members, as well as Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania.

  • OECD AI Principle 1: AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being. For years, we’ve been discussing the ethics of AI here at Futurum. One of the biggest questions has always been how AI will impact the human workforce. The first principle sets a clear precedent: we as technology developers and world leaders must consider the human impact of AI development before we consider the business opportunity it could bring. As we’ve discussed before, AI has the power to cause unemployment and to widen the gap between haves and have-nots, especially on a global scale. That’s too important an issue to go unaddressed.
  • OECD AI Principle 2: AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society. We already know AI is not bias-free. Could it ever be? This principle seeks to at least illuminate the risk of governments and other entities using AI to infringe upon human rights through discriminatory processes. It encourages everyone to safeguard people from that possibility, while also making clear that AI should never run rampant in society. Humans should always be able to intervene when necessary.
  • OECD AI Principle 3: There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them. This point goes back to the issues of transparency, privacy, and security. Does the public really understand what information is being gathered about them? Do they know how the algorithms will be used? Do they have any right to opt out of data collection? Safe AI development cannot exist in an environment where the public doesn’t understand its implications.
  • OECD AI Principle 4: AI systems must function in a robust, secure and safe way throughout their life cycles, and potential risks should be continually assessed and managed. As we’ve discussed before, tech development isn’t a one-time thing, and neither is security. When we collect data and teach a system to act on it, we have to remember that it will keep gathering and acting until we tell it otherwise. What are our plans to keep the data safe in the meantime? What are our standards for scrubbing or purging data? What are our plans for upgrading systems to ensure they aren’t compromised? All of these questions need to be part of any smart AI discussion.
  • OECD AI Principle 5: Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles. This principle goes a step further, pinning responsibility for AI development on those creating it. After all, we can’t just develop self-learning technology and throw up our hands when it goes rogue. Those creating it need to know they’ll be held accountable for both the good and the damage it may cause over time.

As I noted above, the OECD AI principles are not legally binding. There is no way to guarantee that member nations or the technologists operating within them will adhere to the rules governing AI development. Still, it’s heartening to see so many nations coming together to recognize that the ethical development of AI needs to be a priority now, before the technology gets too far out of the gate. After all, the changes we see from AI will most likely be irreversible, be it in terms of job creation and elimination, automation, or robotics. I commend the OECD team for compiling such a thoughtful list of directives to guide AI’s development into 2020 and beyond.

The original version of this article was first published on Futurum Research.

Daniel Newman is the Principal Analyst of Futurum Research and the CEO of Broadsuite Media Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring digital transformation and how it is influencing the enterprise. From big data to IoT to cloud computing, Newman makes the connections between business, people and tech that companies need to benefit most from their technology projects, which leads to his ideas regularly being cited in CIO.com, CIO Review and hundreds of other sites across the world. A five-time best-selling author, most recently of “Building Dragons: Digital Transformation in the Experience Economy,” Daniel is also a Forbes, Entrepreneur and Huffington Post contributor. An MBA and graduate adjunct professor, Daniel Newman is a Chicago native whose speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
