Machines have no emotions. So, they must be objective—right? Not so fast. A new wave of algorithmic issues has recently hit the news, bringing the bias of AI into greater focus. The question now is not just whether we should allow AI to replace humans in industry, but how to prevent these tools from further perpetuating race and gender biases that harm society if and when they do.
First, a look at bias itself. Where do machines get it, and how can it be avoided? The answer is not as simple as it seems, but it can be summed up in one phrase: "machine bias is human bias." And that bias can develop in a multitude of ways. For example, a hiring model trained on historical decisions that favored men will learn to favor men, even if gender never appears as an explicit input.
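To make that concrete, here is a minimal sketch in Python. Everything in it is synthetic and hypothetical: the groups, the hire rates, and the "model" are invented purely for illustration. The point it demonstrates is that a naive model trained on skewed historical records simply echoes the skew back.

```python
# Toy illustration: a naive "model" learns hiring decisions from
# biased historical data and reproduces that bias on new candidates.
# All data below is synthetic; the rates are made up for the example.
import random

random.seed(42)

# Hypothetical history: qualifications are identical across groups,
# but past human decisions favored group A over group B.
history = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.5
    # Biased past decisions: qualified A-candidates hired 90% of the
    # time, equally qualified B-candidates only 50% of the time.
    hired = qualified and random.random() < (0.9 if group == "A" else 0.5)
    history.append((group, qualified, hired))

def hire_rate(group, qualified):
    """Naive model: estimate hire probability per (group, qualified) cell."""
    rows = [h for g, q, h in history if g == group and q == qualified]
    return sum(rows) / len(rows)

# The learned model faithfully reproduces the human bias in its data.
print("P(hire | qualified, group A):", round(hire_rate("A", True), 2))
print("P(hire | qualified, group B):", round(hire_rate("B", True), 2))
```

Note what happens here: the model never "decided" to discriminate. It simply compressed a biased history into a rule, which is exactly how human bias becomes machine bias.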
The question remains: what do we do about it? As I noted in my piece Artificial Intelligence: To Be Feared or Embraced?, one of the most maddening aspects of AI is that even the people developing it don't fully understand how it works. Yet AI and machine learning seem to be on a bullet train, and most companies show no sign of slowing down. I believe that as awareness of AI bias and "math-washing" continues to grow, so will the demand for greater transparency in AI development. After all, the algorithms major companies use to feed us news and information are shaping the decisions we make in our businesses and personal lives.
A number of watchdog organizations like AI Now are popping up to start the fight. But I anticipate that in the future, machine bias will be such a large issue that many companies will need to create completely new positions, such as bias detectors and algorithm analysts, to ensure that their AI is as bias-free as possible. And when I say "new positions," I'm not talking about "tech positions." After all, much of the bias currently found in algorithms is weighted toward males because they are largely the group creating the algorithms to begin with. I personally call for more non-technical writers and communicators who can explain algorithms in layman's terms, bringing greater transparency to customers, readers, and viewers so that we truly understand where the information we're receiving comes from.
I'd also venture to say that in the future, anyone hired to code algorithms will need to be vetted for bias before they even start. Only then will a company really know what types of bias might pop up in its data, and be able to work to prevent them.
Will we ever build truly objective machines? Not likely. As long as humans are involved in the process, some bias will exist. But what we can do, right now, is increase the transparency of every algorithm in use by publishing disclaimers and using plain language, so the public truly understands the impact these algorithms have on our daily lives. For many of us, it's more than a matter of which Facebook posts we see every day. It could be a matter of which job we get, how much we make, and even whether we go free. It seems obvious to me that those aren't decisions we should leave up to machines alone.
Additional Resources on This Topic:
The Ethics of Artificial Intelligence
Artificial Intelligence: To Be Feared or Embraced?
Artificial Intelligence and Automation: Predictions for the Future
How to Avoid the Dangers of Data-Bias