If you’ve had enough of fake news, brace yourself. An eruption of deepfake technology is spreading across the internet, and soon digitally doctored videos might be the new norm on our social media pages. How do we handle this new technology that has the power to compromise everything from personal privacy to national security? Do we ban it? Monitor it? Find the good aspects? Or simply see where it leads us?
What is a Deepfake?
A deepfake is an AI-generated manipulation of content, typically made by splicing footage of two or more people together. For instance, a user might splice together footage of a famous actress with footage from a porn video and present it as a legitimate “sex tape” of the star. Researchers at the University of Washington distributed a video of Barack Obama in which they were able to make him say virtually anything they wanted him to.
Recently, however, companies have even figured out how to create videos from a single photo—meaning someone could use a photo from your social media feed to create a video of you saying or doing any number of things, without your permission. An app called DeepNude gave users the ability to upload a clothed photo of a woman and create their own nonconsensual porn. Clearly, deepfakes are getting out of control and pose a real threat. So far, though, we haven’t found a way to manage or prevent them.
Managing Digital Representations in Rapid Digital Transformation
It’s believed that within the next 12 months, deepfakes will be visually undetectable, ironically forcing us to rely on AI to determine what is fake and what is real by looking for inconsistencies, time stamps, and other verification signals. These issues raise a number of ethical questions. Yes, legitimate use cases for deepfakes exist. But does the value of those use cases outweigh the privacy risks any of us could face thanks to deepfake technology?
For instance, the most viable use cases for deepfake technology currently belong to retail. The sector is using deepfake technology to place consumers into digital branding campaigns and virtual dressing rooms, and to allow them to explore products in the real—er, virtual—spatial web. Conceivably, that’s good news for retail companies, which stand to gain a few more customers drawn to the opportunity to see themselves walking the runway in designer clothes or driving through town in a luxury car. Another potential use: a movie producer could draw on past footage to recreate a better version of a scene without the actor needing to be present. But is that gain valuable enough to legitimize the risks?
Actress Jameela Jamil has been on a campaign to eliminate photo retouching in acting and modeling to prevent young people from developing a distorted view of what they are supposed to look like. Deepfakes could make that problem so much worse, as the line between real and digital becomes ever more confusing. Where does one’s digital self begin and end? Who owns it? Do experiences in the spatial web mean as much as those in real life? Do we have to pay for them? If we hand over our personal image to companies, how will they manage and protect—rather than sell or circulate—it?
There are no easy answers when it comes to deepfakes, and for the most part, there is no clear plan to eliminate them, even on the part of social media giants like Facebook. For instance, Facebook famously refused to delete a doctored video of House Speaker Nancy Pelosi, even after it was determined to be a fake. Showing that they’re consistent in their willingness to accept deepfakes online, Facebook also refused to remove a deepfake video of CEO Mark Zuckerberg that was uploaded to Instagram, in which he claims he’ll be able to control the future thanks to data stolen from his social media throne.
Find the Opportunities
While deepfake technology has frightening uses, as with most technologies, I think we need to focus on the opportunities. Medical researchers are starting to use deepfakes to train AI to identify certain diseases. Adobe is working on an AI that can identify deepfakes. This technology could push us to develop other technologies. We need to focus on the developers who are pushing the boundaries of this tech in a positive way before we grab our pitchforks and torches to shut it down. Yes, it’s obvious that we need to start drawing some clear ethical boundaries around the creation of deepfake content and the definition of “entertainment” itself, but I think it’s still too early to eliminate the technology completely.
Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.
Photo Credit: CBS News
The original version of this article was first published on Futurum Research.
Daniel Newman is the Principal Analyst of Futurum Research and the CEO of Broadsuite Media Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring digital transformation and how it is influencing the enterprise. From big data to IoT to cloud computing, Newman makes the connections between business, people, and tech that are required for companies to benefit most from their technology projects, which leads to his ideas regularly being cited in CIO.com, CIO Review, and hundreds of other sites across the world. A five-time best-selling author, most recently of “Building Dragons: Digital Transformation in the Experience Economy,” Daniel is also a Forbes, Entrepreneur, and Huffington Post contributor. An MBA and graduate adjunct professor, Daniel Newman is a Chicago native, and his speaking takes him around the world each year as he shares his vision of the role technology will play in our future.