Your Artificial Intelligence Is Not Bias-Free

In Technology by Daniel Newman


Machines have no emotions. So they must be objective, right? Not so fast. A new wave of algorithmic issues has recently hit the news, bringing the bias of AI into sharper focus. The question now is not just whether we should allow AI to replace humans in industry, but how to keep these tools from perpetuating race and gender biases that harm society if and when they do.

First, a look at bias itself. Where do machines get it, and how can it be avoided? The short answer: machine bias is human bias. And that bias can develop in a multitude of ways. For example:

  • Data-driven bias: If there is one objective lesson machines have learned, it's this: garbage in, garbage out. Machines do not question the data they are given; they look for patterns within it. For instance, learning systems trained to predict recidivism rates in parolees scored Black parolees almost twice as likely as white parolees to be high-risk reoffenders, while white parolees were far more likely to be labeled low-risk and then go on to commit other crimes. When the data is skewed by human bias, the AI's results will be skewed as well, in this case affecting something as serious as human freedom. (A toy illustration of this mechanism appears after this list.)
  • Interactive bias: By now, we're probably all familiar with the disaster that was Tay, Microsoft's Twitter-based chatbot that turned into an aggressive racist after learning from its interactions with Twitter users. When machines are taught to learn from those around them, they don't decide which inputs to filter out. They simply take it all in, for better or worse.
  • Emergent bias: Somewhat like interactive bias, emergent bias involves what happens via interaction over time. For instance, all of us on Facebook know we don’t always see the updates our friends post. That’s because Facebook has an algorithm that decides which posts we are most likely to want to see. Unfortunately, that often means there are a lot of things we never even know about—just because Facebook’s math equation decided against it.
  • Similarity bias: As the country deals with a new round of political division and racism, similarity bias is another major problem. It emerges when algorithms distort the content people see when looking for news and information online. Rather than showing them all the news options, these algorithms show them the options they are most likely to agree with, a situation that further compounds political polarization on both sides.
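To make the data-driven case concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the groups, the rates, and the toy "model"); it simply shows that a system learning patterns from biased labels reproduces the bias, even when the true behavior of both groups is identical.

```python
# Minimal sketch of "garbage in, garbage out" with hypothetical data:
# a model trained on labels skewed against one group reproduces that
# skew in its predictions.
import random

random.seed(0)

# Hypothetical training data: both groups have the SAME true base rate,
# but historical labeling flags group "a" as high-risk more often.
def historical_label(group):
    base_rate = 0.3                              # identical for both groups
    human_bias = 0.25 if group == "a" else 0.0   # biased labeling of group "a"
    return random.random() < base_rate + human_bias

train = [(g, historical_label(g)) for g in ["a", "b"] for _ in range(5000)]

# A "model" that simply learns the labeled high-risk rate per group:
# pure pattern-matching, with no questioning of the data it was given.
def fit(rows):
    rates = {}
    for group in {g for g, _ in rows}:
        labels = [y for g, y in rows if g == group]
        rates[group] = sum(labels) / len(labels)
    return rates

model = fit(train)
print(model)  # group "a" scores roughly 0.55 high-risk vs roughly 0.30
              # for "b", even though the true underlying rate is identical
```

The point is not the arithmetic but the mechanism: the model never questions its labels, so the human bias baked into them comes out the other side dressed up as a "prediction."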

The question remains: what do we do about it? As I noted in my piece Artificial Intelligence: To Be Feared or Embraced?, one of the most maddening aspects of AI is that even the ones developing it don’t fully understand how it works. Yet, AI and machine learning seem to be on a bullet train, and most companies are showing no sign of stopping. I believe that as the awareness of AI bias and “math-washing” continues to evolve, so will the demand for greater transparency in AI development. After all, the algorithms major companies are using to feed us news and information are impacting the decisions we make in our businesses and personal lives.

A number of "watchdog" organizations like AI Now are popping up to start the fight. But I anticipate that in the future, machine bias will be such a large issue that many companies will need to create completely new positions, such as bias detectors and algorithm analysts, to ensure that their AI is as bias-free as possible. And when I say "new positions," I'm not talking only about tech positions. After all, much of the bias currently found in algorithms is weighted toward males because they are largely the group creating the algorithms to begin with. I personally call for more writers and communicators from outside technology who can explain algorithms in layman's terms, bringing greater transparency to customers, readers, and viewers so that we truly understand where the information we're receiving comes from.
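What might such a bias-detector role actually check? One common starting point, and the disparity at the heart of the recidivism example above, is comparing error rates across groups. The sketch below is a hypothetical audit in Python; the field names and the ten-point threshold are illustrative assumptions, not an established standard.

```python
# Hypothetical audit a "bias detector" might run: compare false positive
# rates across groups and flag the model if the gap is too wide.
def false_positive_rate(rows, group):
    # Among people in this group who did NOT reoffend, how many
    # were nonetheless flagged high-risk?
    negatives = [r for r in rows if r["group"] == group and not r["reoffended"]]
    flagged = [r for r in negatives if r["flagged_high_risk"]]
    return len(flagged) / len(negatives) if negatives else 0.0

def audit(rows, groups, max_gap=0.1):
    rates = {g: false_positive_rate(rows, g) for g in groups}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= max_gap  # fails if the gap exceeds the threshold

# Usage with toy records (hypothetical field names):
rows = [
    {"group": "a", "reoffended": False, "flagged_high_risk": True},
    {"group": "a", "reoffended": False, "flagged_high_risk": True},
    {"group": "a", "reoffended": True,  "flagged_high_risk": True},
    {"group": "b", "reoffended": False, "flagged_high_risk": False},
    {"group": "b", "reoffended": False, "flagged_high_risk": True},
    {"group": "b", "reoffended": True,  "flagged_high_risk": False},
]
print(audit(rows, ["a", "b"]))  # per-group rates, the gap, and pass/fail
```

A real audit would of course weigh several fairness metrics at once, but even a check this simple makes a model's disparities visible instead of buried.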

I’d also venture to say that in the future, anyone hired to code algorithms will need to be vetted for biases before they even start. Only in that way will a company really know what types of biases might pop up in its data, and be able to try to prevent them.

Will we ever build truly objective machines? Not likely. So long as humans are involved in the process, some bias will exist. But what we can do, right now, is increase the transparency of every algorithm in use by publishing disclaimers and using simple language so the public truly understands the impact these algorithms are having on our daily lives. For many of us, it's more than a matter of which Facebook posts we see every day. It could be a matter of which job we get, how much we make, and whether we keep our freedom. It seems obvious to me that those aren't decisions we should leave up to machines alone.

Additional Resources on This Topic:
The Ethics of Artificial Intelligence
Artificial Intelligence: To Be Feared or Embraced?
Artificial Intelligence and Automation: Predictions for the Future
How to Avoid the Dangers of Data-Bias


Daniel Newman is the Principal Analyst of Futurum Research and the CEO of Broadsuite Media Group. Living his life at the intersection of people and technology, Daniel works with the world's largest technology brands exploring Digital Transformation and how it is influencing the enterprise. From Big Data to IoT to Cloud Computing, Newman makes the connections between business, people, and tech that are required for companies to benefit most from their technology projects, which leads to his ideas regularly being cited in CIO.com, CIO Review, and hundreds of other sites across the world. A five-time best-selling author, most recently of "Building Dragons: Digital Transformation in the Experience Economy," Daniel is also a Forbes, Entrepreneur, and Huffington Post contributor. An MBA holder and graduate adjunct professor, Daniel is a Chicago native whose speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
