According to multiple reports over the past week, Facebook recently shut down its AI-fueled chatbots after they created a secret language for communicating with one another, a language not of human origin. The bots were part of the Facebook Artificial Intelligence Research (FAIR) project, which is designed to create bots that can “negotiate” with other bots and humans.
While this sounds freakishly ominous, Facebook is far from alone in this arena. Google’s DeepMind recently experienced a similar event when one of its systems spawned a new shorthand language, and a number of research labs (including Elon Musk’s OpenAI) are currently working on similar negotiation systems, which are considered a key step in the development of AI.
Why did this particular announcement gain traction? Clickbait headlines suggesting that Facebook, in a state of panic, had “terminated” the bots in an attempt to prevent Skynet.
As a result, there is now a growing and vocal chorus of people looking to rein in AI research.
At its core, this is exactly the result AI researchers are looking for: bots that can learn and innovate in a controlled environment, one in which researchers routinely dial the strength of certain behaviors up or down. The experiment referenced in these articles was part of a program designed to teach bots to negotiate with one another, a precursor to developing bots that can act more human during bot-human interactions. And it was successful.
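To make that “dial” concrete, here is a minimal, hypothetical sketch (not FAIR’s actual code; the toy vocabulary, scoring functions, and `lam` weight are all illustrative) of how a single tunable weight can trade negotiation payoff against staying in readable English. With the weight at zero, nothing in the objective penalizes degenerate shorthand, which is the behavior the articles described.

```python
# A minimal, hypothetical sketch of the researchers' "dial" (not FAIR's
# actual code). The vocabulary, scoring functions, and `lam` weight are
# illustrative stand-ins for a real training objective.

ENGLISH_VOCAB = {"i", "want", "two", "books", "and", "one", "ball", "deal", "no"}

def payoff(items_won, values):
    """Points the bot earns for the items it negotiated to keep."""
    return sum(values[item] * count for item, count in items_won.items())

def human_likeness(utterance):
    """Crude proxy: the fraction of tokens that are ordinary English words."""
    tokens = utterance.lower().split()
    return sum(t in ENGLISH_VOCAB for t in tokens) / max(len(tokens), 1)

def objective(items_won, values, utterance, lam):
    """Combined score. With lam = 0 the objective is indifferent to language,
    so degenerate shorthand goes unpenalized; raising lam dials up the
    pressure to stay in readable English."""
    return payoff(items_won, values) + lam * human_likeness(utterance)

won = {"books": 2, "ball": 1}
vals = {"books": 3, "ball": 2, "hats": 1}
# Same deal, two utterances: bot shorthand vs. plain English.
print(objective(won, vals, "ball ball to me to me to me", lam=0.0))    # 8.0
print(objective(won, vals, "i want two books and one ball", lam=5.0))  # 13.0
```

Turning `lam` up is the researcher’s dial: the same deal scores higher when expressed in plain English, so the training pressure pushes the bots back toward human-readable language.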
There is tremendous value in this type of learning, not just for businesses but for users, who stand to gain more personalized and more useful experiences.
In Facebook’s case, no axes or machetes were taken to the network, and we view this event in a positive light. The bots involved used both language shortcuts (to improve efficiency) and deception (to negotiate more effectively toward the programmed end result).
As pointed out in the CNET article, the creation of “secret” languages is something children often do as they learn how to communicate. AI bots are no different: they are children still learning how to learn.
The risks of the AI “black box,” in which we can see a system’s inputs and outputs but not how its decisions are actually made, are well known.
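As a rough illustration (a toy model, not any real production system), the sketch below shows what “inputs and outputs, but not the magic in between” looks like in practice: the only honest interface an outside observer has is to probe the model and watch the output move.

```python
import numpy as np

# A toy "black box" (illustrative only). W1 and W2 stand in for a real
# network's millions of parameters; reading them tells you nothing useful
# about why any individual decision came out the way it did.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))

def model(x):
    """Inputs in, score out: the only interface an outside observer has."""
    return (np.tanh(x @ W1) @ W2).item()

# All we can do from the outside is probe: nudge one input, watch the output.
base = np.array([0.5, 1.0, -0.3, 0.8])
for delta in (0.0, 0.5, 1.0):
    probe = base.copy()
    probe[0] += delta
    print(f"input[0] = {probe[0]:+.1f} -> score = {model(probe):+.3f}")
```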
There is clearly a need to better educate the public and businesses on the realities of AI, machine learning, and bots. But the public also needs to educate researchers and industry on what is acceptable, particularly from a social perspective, as we inject more of this technology into our business-decision and customer-support systems.
This isn’t about bots replacing humans, it’s about bots providing a more human experience.
Photo Credit: derekallenmullen Flickr via Compfight cc
This article was first published on Futurum.xyz