What happens when Hollywood-level special effects can be mastered by any tech-savvy individual?
Sometimes, the results can be unbelievably imaginative, leading to creative output that can entertain, educate, or inspire.
But things can also take a sinister turn. That’s where deepfake technology seems to be headed, and it has many people concerned about its deceitful possibilities.
Perhaps most troubling is the potential use of deepfake content in disinformation campaigns.
Fake news is already a serious threat, and deepfakes could make this type of propaganda more convincing than ever before.
If you’re not familiar with deepfakes yet, you’ll likely come across them sooner rather than later.
That’s why it’s so important to read up on the ins and outs of this relatively recent internet-fueled phenomenon.
What is a deepfake?
Deepfakes are images or videos that have been altered to feature someone else’s face, like an advanced form of face-swapping.
Although some deepfake videos are clearly doctored and inauthentic, many look and sound convincingly real.
The most common use so far has been deepfake pornography, in which the face of an actor in a pornographic video is replaced with that of a celebrity.
Deepfake technology has also been used to contribute to fake news, hoaxes, revenge porn, and other types of deception.
For example, a deepfake video could feature a politician saying things they never stated through image and audio manipulation, like this video of President Obama.
A number of deepfakes have also been developed for creative or entertainment purposes. For example, a video was created to show what it might be like if John Krasinski had played the role of Captain America in the Marvel movie franchise.
In both of these video examples, the deepfake technology is no secret. Viewers are given a peek at the original video and audio sources to understand how the videos were made.
But not all deepfakes are presented this way. Many are created with the intention of misleading the viewer to believe that what they are seeing is real.
Why are they called deepfakes?
The name comes from a combination of two terms: “deep learning” and “fake.”
Deep learning is a type of machine learning based on artificial neural networks (ANNs).
The artificial intelligence algorithms used to create deepfakes typically rely on generative adversarial networks (GANs), in which two neural networks work in tandem: a “generator” that creates the images, and a “discriminator” that judges how real or fake they appear.
Once the network has analyzed a data set across its many layers (which is where the “deep” part of the name originates), the patterns it learns can then be applied to other media.
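To make the generator-versus-discriminator dynamic concrete, here is a deliberately tiny sketch in plain NumPy: the “images” are just numbers drawn from a 1-D Gaussian, the generator is a single linear layer, and the discriminator is logistic regression. Real deepfake systems use deep convolutional networks on pixel data, so treat this only as an illustration of the adversarial training loop, not an actual deepfake pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for "real" data: samples from a 1-D Gaussian.
def real_samples(n):
    return rng.normal(loc=4.0, scale=0.5, size=(n, 1))

# Generator: one linear layer mapping random noise to fake samples.
g_w, g_b = rng.normal(size=(1, 1)), np.zeros(1)

def generate(n):
    z = rng.normal(size=(n, 1))                 # noise input
    return z @ g_w + g_b                        # fake samples

# Discriminator: logistic regression scoring how "real" a sample looks.
d_w, d_b = rng.normal(size=(1, 1)), np.zeros(1)

def discriminate(x):
    return 1.0 / (1.0 + np.exp(-(x @ d_w + d_b)))   # probability "real"

lr, n = 0.05, 64
for step in range(500):
    # Discriminator step: push p(real) toward 1 and p(fake) toward 0.
    xr, xf = real_samples(n), generate(n)
    grad_r = discriminate(xr) - 1.0             # dBCE/dlogit for real labels
    grad_f = discriminate(xf) - 0.0             # dBCE/dlogit for fake labels
    d_w -= lr * (xr.T @ grad_r + xf.T @ grad_f) / n
    d_b -= lr * (grad_r.sum() + grad_f.sum()) / n

    # Generator step: push p(fake) toward 1 by back-propagating through D.
    z = rng.normal(size=(n, 1))
    xf = z @ g_w + g_b
    dL_dx = (discriminate(xf) - 1.0) @ d_w.T    # gradient flowing into samples
    g_w -= lr * (z.T @ dL_dx) / n
    g_b -= lr * dL_dx.sum() / n
```

Each round, the discriminator gets slightly better at telling real from fake, which pressures the generator’s output distribution toward the real one; that same adversarial pressure, scaled up to deep networks and video frames, is what makes deepfake imagery steadily more convincing.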
For instance, certain types of deep learning software could analyze existing videos of a celebrity for 3D facial mapping, which aids in recreating realistic movements and familiar mannerisms and expressions in a deepfake video.
Similarly, recorded audio of someone speaking could be processed for cadence, tone, and other speech patterns, allowing a deepfake creator to produce synthetic audio.
As artificial intelligence technology has become increasingly sophisticated, it has also become easier than ever to create deepfakes capable of deceiving viewers.
Who created deepfakes?
The technology used to make deepfakes has been in development for several decades.
However, the name originated in 2017 after a Reddit user who went by the name “deepfakes” posted several altered pornographic videos featuring famous actresses.
That spawned the creation of a dedicated subreddit called r/deepfakes, which quickly gained plenty of online traffic.
Users visited the site to post and view other doctored videos, many of which featured celebrities’ faces swapped onto other people’s bodies.
Eventually, Reddit and other sites began to crack down on these videos, particularly deepfake pornography.
How can you identify deepfakes?
Earlier deepfake videos, even some from just a few years ago, are easier to detect as doctored.
Viewers might notice that a voice isn’t matching up with mouth movements, or that something about the person’s face appears off in an almost unsettling way.
That uncanny feeling while watching a video can be a sign that it’s a deepfake.
These days, however, deepfake detection isn’t always so obvious. The software is more advanced than ever before, so deepfakes are becoming harder to identify.
One thing to watch for is the quality of a video. Often, a deepfake will feature lower-quality footage in an attempt to hide the elements of the video that might otherwise reveal its inauthentic nature.
Other tips for detecting deepfakes include:
- Consider whether the source is reputable.
- Find out if the video is available elsewhere online (and if so, whether those sources can be trusted).
- Slow down the footage to look for strange movements, particularly around the speaker’s mouth.
- Notice whether eye movements look strange or if there is a lack of blinking.
- Look for inconsistencies with lighting and shadows.
- Listen for audio issues, and watch for lip-syncing problems.
- If it’s an image rather than a video, do a reverse image search online to see if similar photos come up.
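The reverse-image-search tip often comes down to a perceptual hash: shrink the image to a few dozen pixels, threshold each cell against the image’s own average brightness, and compare the resulting bit patterns. Below is a minimal sketch in NumPy; the random array stands in for a grayscale photo, since loading real image files would require an imaging library.

```python
import numpy as np

def average_hash(img, size=8):
    """Block-average a grayscale image down to size x size cells,
    then threshold each cell against the overall mean brightness."""
    h, w = img.shape
    img = img[:h - h % size, :w - w % size]      # crop so blocks divide evenly
    blocks = img.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()    # 64-bit fingerprint

def hamming(a, b):
    """Count of differing bits; a small distance suggests near-duplicates."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(0)
photo = rng.random((64, 64))     # stand-in for a grayscale photo
brighter = photo + 0.2           # same photo, uniformly brightened
```

Because the threshold is relative to the image’s own mean, a uniformly brightened copy produces the identical fingerprint (`hamming` distance 0), while an unrelated image typically differs in many bits — which is roughly how reverse-image-search services spot re-uploads and edits of a known picture.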
Helpful examples can be found on MIT Media Lab’s Detect Fakes website.
Computerized detection programs
The Defense Advanced Research Projects Agency (DARPA), which is part of the U.S. Department of Defense, is looking into better ways to identify deepfakes.
Because they are becoming more difficult to detect with the human eye, experts at DARPA are developing computers that can automatically identify deepfakes.
The team at DARPA trains the computers by having them compare original and doctored videos and learn to spot the discrepancies between the two.
Other organizations are also investigating ways for computers to automatically detect deepfakes in an effort to prevent disinformation and other potential risks associated with these videos.
Are deepfakes legal?
Convincing deepfakes are easier to create than ever before, especially with software like FakeApp and DeepFaceLab.
The legal landscape surrounding deepfakes is murky, however. Only a few states have taken steps to impose legal repercussions for creating and disseminating deepfake images and videos.
In 2019, Texas passed a law that prohibits the use of deepfakes that could influence local, state, and U.S. elections. That same year, Virginia banned deepfake pornography.
But similar laws are few and far between. Many legal experts assert that lawsuits involving deepfake videos would likely be subject to First Amendment challenges.
From cyber harassment to defamation to financial fraud, there are a number of potential crimes which could be committed with the assistance of deepfake technology.
But so far, there are few legal protections in place, and even fewer legal precedents.
What are the risks of deepfakes?
There are a number of serious problems which can arise from the creation of deepfakes.
The proliferation of deepfakes online makes this an especially pressing issue.
According to a report by Deeptrace, the number of deepfake videos nearly doubled between December 2018, when there were only 7,964 videos found, and September 2019, when that figure had ballooned to 14,678.
Although this type of video represents huge leaps in technological capabilities, it can also create confusion or even chaos depending on how it’s used.
The following represent some of the biggest threats posed by deepfake videos.
Currently, one of the most concerning things about the type of synthetic media created with deepfakes is the potential for a widespread disinformation campaign.
There is a growing portion of the population that has become mistrustful of what they refer to as the mainstream media.
These individuals are prime targets for deepfakes that align with their worldview.
Today, most of the disinformation strategies employed in deepfakes are used to fuel conspiracy theories.
But the potential for more serious ripple effects is certainly there.
For example, in 2020, a deepfake video featuring Belgium’s Prime Minister Sophie Wilmès showed her giving an official address.
In that doctored speech, she states that COVID-19 and other diseases like Ebola have been caused by “exploitation and destruction by humans of our natural environment”—something the Belgian leader never actually said.
Videos like these can potentially cause panic or incite anger, which is exactly what happened in Gabon in 2019.
The country’s president appeared in a video that many suspected could be a deepfake. That fueled unrest, ultimately leading to an attempted coup.
This type of disinformation is one of the biggest national security concerns and has led to efforts like the one at DARPA to get deepfakes under control before it’s too late.
Another risk involves the use of deepfakes to discredit or malign someone by essentially putting words into their mouth or showing them doing something scandalous or incriminating—almost like digital forgeries.
Pornographic deepfakes continue to be the biggest issue in this category. According to Deeptrace, 96 percent of the deepfake videos online in 2019 featured pornographic content.
Other forms of misrepresentation have gained widespread attention, however.
Take this example of Mark Zuckerberg featured in a deepfake where he makes ominous, borderline threatening statements, saying, “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”
The video was produced in response to Facebook’s refusal to prioritize the removal of deepfakes on their own social media platform, so most people who saw it were aware that it wasn’t real.
However, it’s easy to see how this type of misrepresentation could create havoc for someone personally and professionally.
These videos can potentially fool anyone, including those who wield a lot of power.
President Donald Trump actually retweeted a video of House Speaker Nancy Pelosi which had been edited to make it appear as though she was having difficulty speaking coherently, thereby calling her mental capabilities into question.
The altered video was originally released by Fox Business, which came under fire for misrepresenting Pelosi—but only after millions had already seen and been influenced by its content.
As more people become aware of deepfakes and their potential to deceive viewers, it also creates the risk of a sort of reverse fake news—that is, claims that authentic videos or audio recordings are not real.
That actually happened with the Access Hollywood tape that featured President Trump bragging about sexual assault.
Although he initially acknowledged that it was him on the tape, he later tried to backtrack, denying that it was real.
There is already widespread mistrust of many media sources. When the information presented by those sources doesn’t fit the messages that others wish to send, someone could potentially categorize certain images or videos as “deepfakes” in order to discredit them.
More technological devices are using facial recognition or voice recognition than ever before.
While it seems like one of the most secure ways to protect a device, the creation of deepfakes calls that into question.
For example, perhaps your Apple iPhone, Google Pixel phone, or Microsoft Windows phone uses one of these features to unlock the device. Maybe your Amazon Alexa, Google Nest, or other smart home device recognizes your voice when acting as an assistant.
But what if someone could recreate your likeness or voice? There are increasing concerns that this new technology puts device security at risk.
What are some positive ways to use deepfakes?
Public opinion has shifted from the initial amusement at deepfakes to a growing wariness in recent years.
But there are quite a few ways that deepfake technology can actually be beneficial.
When used in certain settings, the technology can absolutely be harnessed for positive purposes, as in the examples below.
Filmmakers can use deepfake technology’s video editing capabilities to enhance storytelling.
The same types of artificial intelligence used for creating deepfakes can be applied in cases like the filming of Furious 7, an installment in the Fast & Furious franchise.
Furious 7 star Paul Walker died during the course of filming. To create his remaining scenes, his brothers Caleb and Cody Walker were hired as stand-ins.
The special effects team then applied a computer-generated face and voice in the manipulated shots.
Because they had an extensive library of existing film and audio of Paul Walker to work with, this resulted in a more realistic final product.
Deepfake technology can be used as a tool for special effects in film and television that allows audiences to become more immersed in a story.
Besides Hollywood visual effects, the AI algorithms used to create deepfakes can also teach us new things in an engaging way.
Museums could create videos of historical figures who have been dead for decades or even centuries, much like how the Dalí Museum in Florida developed a life-size deepfake of artist Salvador Dali.
Educational messages can be transmitted around the world, as in a video about ending malaria that used deepfake technology to allow soccer star David Beckham to speak fluently in nine different languages.
In these ways, deepfakes have the potential to provide a creative way to educate and inspire.
Deepfake technology has also become a useful tool in medical research.
For example, AI algorithms can be used to create fake patient data that mirrors real-world data, which helps researchers study how to diagnose and treat diseases without sacrificing patient privacy.
Similarly, this tech can produce fake MRI scans that help train computers to detect tumors and other abnormalities, thereby producing more accurate diagnoses with real patients.
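The simplest version of the synthetic-data idea is to fit a statistical model to the real records and sample new ones from it. The toy sketch below, with made-up column names and numbers purely for illustration, fits a multivariate Gaussian; real medical projects use far richer generative models, including the GANs described earlier.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "real" patient table: columns = age, systolic BP, glucose.
real = rng.normal(loc=[55.0, 120.0, 100.0],
                  scale=[10.0, 15.0, 20.0],
                  size=(500, 3))

# Fit a multivariate Gaussian to the real records...
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ...then draw synthetic records that mirror the overall statistics
# (means, spreads, correlations) without copying any individual row.
synthetic = rng.multivariate_normal(mean, cov, size=500)
```

The synthetic table preserves the population-level patterns a study needs while containing no actual patient’s record — the same privacy trade-off, at much higher fidelity, that deepfake-style generative models offer medical research.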
Though deepfakes have rightly caused some to be concerned about misinformation, privacy breaches, and other related issues, the technology can also be used for beneficial purposes.