
Did Facebook Just Stop the Skynet of Chatbots? No

According to multiple reports over the past week, Facebook recently shut down its AI-fueled chatbots after they created a secret language for communicating, a language not of human origin. The bots were part of a Facebook Artificial Intelligence Research (FAIR) project designed to create bots that can “negotiate” with other bots and humans.

What really happened? What does this mean?

While this sounds freakishly ominous, Facebook is far from alone here. Google’s DeepMind recently experienced a similar event when it spawned a new shorthand language, and a number of research labs (including Elon Musk’s OpenAI) are currently working on similar negotiation systems, considered a key step in the development of AI.

Why did this particular announcement gain traction? Click-bait headlines suggesting that Facebook, in a state of panic, had “terminated” the bots in an attempt to prevent Skynet.

As a result, there is now a growing chorus of voices looking to rein in AI research.

What do we think about it? Who benefits, and are we pro, con, or neutral?

At its core, this is exactly the result AI researchers are looking for: the ability of bots to learn and innovate in a controlled environment, one in which researchers routinely dial the strength of certain characteristics up or down. The experiment referenced in these articles was part of a program designed to teach bots to negotiate with one another, a precursor to developing bots that can act more human during bot-human engagements. And it was successful.
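
To make that “dial” concrete, here is a minimal, hypothetical sketch (not FAIR’s actual code): negotiation bots of this kind are typically trained on a blend of a task objective (strike a good deal) and a language objective (keep sounding like human English). The training_loss function, the lm_weight knob, and the numbers below are illustrative assumptions; turning the language weight toward zero is the condition under which shorthand “secret” languages tend to emerge.

```python
# Hypothetical sketch of a blended training objective, with a tunable
# "dial" for how strongly a bot is anchored to human language.
def training_loss(task_loss: float, lm_loss: float, lm_weight: float) -> float:
    """Blend negotiation performance with human readability."""
    return task_loss + lm_weight * lm_loss

# Illustrative numbers only: the same bot behavior scored under
# different dial settings. At lm_weight=0.0, nothing in the objective
# rewards staying intelligible to humans.
for lm_weight in (0.0, 0.5, 2.0):
    loss = training_loss(task_loss=1.2, lm_loss=3.4, lm_weight=lm_weight)
    print(f"lm_weight={lm_weight}: loss={loss:.2f}")
```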

There is tremendous value in this type of learning, not just from a business perspective but from the user’s perspective, in the form of more personalized and more valuable experiences.

In the case of Facebook, no axes or machetes were deftly applied to the network, and we’re taking a positive view of this event. The bots involved used both language shortcuts (to improve efficiency) and deception (to negotiate their way to the programmed end result).

What should readers think and do as a result, and what should they watch for?

As pointed out in the CNET article, the creation of “secret” languages is something children often do as they learn how to communicate. AI bots are no different: they’re children who are still learning how to learn.

The risks of the AI “black box,” where we know the inputs and outputs but not how the machine-learning magic is performed, are well known.
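
As a toy sketch of that opacity (illustrative code, not tied to any Facebook system): the model below happily maps inputs to outputs, but its learned weight matrices offer no human-readable account of why it answers as it does.

```python
# Toy illustration of the "black box": observable inputs and outputs,
# opaque internals. The data and model choices here are arbitrary.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))              # inputs: fully visible
y = (X[:, 0] * X[:, 2] > 0).astype(int)    # labels: a hidden interaction rule

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
print(model.predict(X[:5]))    # outputs: easy to observe...
print(model.coefs_[0].shape)   # ...but raw weights carry no explanation
```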

There is clearly a need to better educate the public, and businesses, on the realities of AI, machine learning, and bots. But there is also a need for the public to educate researchers and industry on what is considered acceptable, particularly from a social perspective, as we start injecting more technology into our business-decision and customer-support systems.

This isn’t about bots replacing humans; it’s about bots providing a more human experience.

Photo Credit: derekallenmullen Flickr via Compfight cc

This article was first published on Futurum.xyz

Fred McClimans

Fred is an experienced analyst and advisor, bringing over 30 years of experience in the digital and technology markets. Most recently, Fred launched the equity research team at Samadhi Partners and provides marketing strategy through the Wasabi Rabbit digital agency. He previously served as an EVP and Research Vice President at HfS Research, launching its Digital Trust practice and coverage of emerging "trust-enabling" technologies. Fred previously founded the analyst firms Current Analysis, a global tech-sector competitive intelligence and market advisory (acquired by PDM); Decisys, an analytical consultancy (acquired by the Burton Group/Gartner); and the Aurelian Group. Fred’s analyst and consulting experience also includes Gartner and E&Y, with technology experience at Newbridge Networks’ Advanced Technology Group (now Alcatel) and DTECH LABS (now part of Cubic Corporation). Fred studied engineering and music at Syracuse University. A frequent author and speaker, Fred has served as a guest lecturer at the George Mason University School of Business (Porter: Information Systems and Operations Management), keynoted the Colombian Asociación Nacional de Empresarios Sourcing Summit, served as an executive committee member of the Intellifest International Conference on Reasoning (AI) Technologies, and has spoken at #SxSW on trust in the digital economy. His analysis and commentary have appeared through venues such as Cheddar TV, Adotas, CNN, Social Media Today, Seeking Alpha, Talk Markets, and Network World (IDG).
