As deepfake video technology has become increasingly sophisticated, threats to the security of individuals and businesses have increased as well.
This goes well beyond the abilities of Adobe Photoshop and other straightforward editing apps. Instead, deepfakes require the use of artificial intelligence technology and neural networks.
In recent years, these videos have largely targeted celebrities, world leaders, and other recognizable figures.
However, the advanced open-source algorithms that power deepfakes are more accessible than ever before, which means that more people could potentially be a target for these cyberattacks.
Fortunately, there are ways to reduce your risk of falling victim to these digital scams. That’s why it’s important to learn more about your options for outsmarting deepfake technology and minimizing cybersecurity threats.
Check out some examples of deepfakes and get some of the best strategies for defending yourself from deepfakes, whether it’s protecting your personal identity or knowing how to spot fake video and audio.
High-profile deepfake examples
Deepfakes have frequently made headlines over the years. Here are some prime examples:
- Barack Obama deepfake: This deepfake features a video of President Obama with Jordan Peele doing an impersonation of his voice. It was created to warn viewers about fake news online.
- Mark Zuckerberg deepfake: This video makes it appear as if the Facebook founder is implying that he intends to wield control with stolen data. It brought awareness to Facebook’s lack of action in removing deepfakes from its platform.
- Nancy Pelosi deepfake: A video of House Speaker Pelosi was slowed down to make it seem like she was impaired (strictly speaking a "cheapfake," since it was made with simple editing rather than AI). It was spread widely and caused many to question her ability to perform her work, even after being debunked.
- Donald Trump deepfake: A political group in Belgium spread a deepfake video of Donald Trump in which he calls on the country to exit the Paris Climate Agreement. It was intended to bring awareness to climate change issues, but many viewers didn’t realize it was fake.
- Gabon president deepfake: A video of the country’s president was supposed to quell concerns about his lack of public appearances. But many suspected that it was a deepfake, and the confusion ultimately contributed to an attempted coup.
Protect your online image
Think about how many photos you’ve uploaded online over the years. The number may be in the hundreds or even the thousands.
While it’s great to have easy access to all your favorite memories, those same images could put your identity at risk online.
In order to create a deepfake, a huge data set of images and/or videos of the intended target is required.
That’s why deepfake creators primarily targeted prominent individuals at first. It’s simply easier to access photos and videos of famous people.
But many average internet users now have a significant number of photos and videos posted to their social media accounts and other websites, which puts them at a higher risk for deepfake attacks.
With the right technology, someone could potentially produce deepfakes featuring your likeness.
Some of these deepfakes can be highly convincing. One study found that deepfakes can potentially outsmart facial recognition technology.
Even the application programming interfaces (APIs) used by major corporations like Amazon and Microsoft were fooled by the use of deepfakes.
In fact, up to 78% of the deepfakes generated for the study were able to fool Microsoft’s Azure Cognitive Services API.
That’s why it’s so important to limit the audience for your images and videos.
Apply privacy settings
One of the most important steps in limiting access to your image is to minimize the number of personal photos you upload.
The fewer photos and videos of you available, the less likely it is that a convincing deepfake can be produced using your image.
For the photos and videos you do want to share, make sure you have privacy settings in place so they can’t be viewed by the general public.
For example, you can set up your Instagram account to be private, which means other users have to ask for your permission before they can see your posts and stories.
On Facebook, you can limit who can see photos and videos you upload. You can also opt for Facebook to not use facial recognition technology to recognize you in photos and videos (under “Face Recognition” in your settings).
Other efforts to stop deepfake scams
Ultimately, there is not much you can do on an individual level to protect yourself from being featured in a deepfake.
However, it’s helpful to know that other efforts are in the works to minimize the harm that deepfakes can cause.
The potential risks associated with deepfakes are widely recognized, which is why it’s important that protections are put in place on a wider scale.
The following sections outline some of the major initiatives to protect internet users from deepfake scams.
Legislation against deepfakes
A number of legal protections regarding deepfakes have been enacted in recent years.
These measures are specifically designed to address deepfakes created with the intent to commit harm against an individual or organization.
For example, the state of California has passed bills that ban the creation of deepfake pornography as well as the use of fake images and videos of political candidates that could potentially affect the outcome of an election.
The National Defense Authorization Act of 2021 operates on a larger scale, directing the Department of Homeland Security to analyze the technology behind deepfakes along with the potential threats they might pose to national security.
There is a potential for deepfakes to be categorized as harassment under the law as well. But there are few legal precedents so far, which limits how much protection people and businesses have against these attacks.
Hopefully, more legislation will be enacted to classify deepfakes as cybercrimes to ensure that those who create these videos with malicious intent will be punished appropriately for their actions.
Training computers in deepfake detection
Artificial intelligence methods such as deep learning, machine learning, and generative adversarial networks (GANs) are integral to the creation of deepfakes.
However, similar types of artificial intelligence can also be used as a tool in protecting against these inauthentic videos.
Certain groups are working on initiatives like these right now, including Microsoft, which has launched a new deepfake detection tool called Video Authenticator.
This tool provides a confidence score indicating how likely it is that a piece of media has been artificially manipulated, which could help limit the spread of fake news.
Another significant effort in this area comes from the Defense Advanced Research Projects Agency, or DARPA, an agency of the U.S. Department of Defense.
Through its media forensics research programs, DARPA is looking for better ways to identify deepfakes in an attempt to prevent large-scale disinformation attacks.
One of the goals is to create tech programs that can find even the smallest flaws in deepfakes by feeding computers original and manipulated videos and training them to discover every discrepancy.
Over time, the technology can become more and more sophisticated, which reduces the likelihood that a deepfake could pass through these digital checkpoints without raising red flags.
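The training approach described above can be sketched at a toy scale. The following is a hypothetical illustration, not a real detector: it trains a tiny logistic-regression classifier on invented per-video features (a "flicker score" and a "blink rate" are assumptions chosen for this example) labeled as real or fake, mirroring how detectors learn discrepancies from paired original and manipulated footage.

```python
import math

# Invented toy data: feature vector is [flicker_score, blink_rate].
# In this sketch, fake clips flicker more and blink less than real ones.
real = [([0.10, 0.90], 0), ([0.20, 0.80], 0), ([0.15, 0.85], 0)]
fake = [([0.80, 0.20], 1), ([0.90, 0.10], 1), ([0.85, 0.30], 1)]
data = real + fake

def predict(w, b, x):
    """Probability that a clip is fake, via the logistic function."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# Train with plain stochastic gradient descent on log loss.
w, b = [0.0, 0.0], 0.0
for _ in range(2000):
    for x, y in data:
        err = predict(w, b, x) - y
        w = [wi - 0.1 * err * xi for wi, xi in zip(w, x)]
        b -= 0.1 * err

labels = [1 if predict(w, b, x) > 0.5 else 0 for x, _ in data]
print(labels)  # recovers the training labels: [0, 0, 0, 1, 1, 1]
```

Real systems replace these hand-picked features with deep networks trained on millions of frames, but the principle is the same: learn a boundary between authentic and manipulated examples.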
What are some tips for protecting yourself against deepfakes?
Protecting your accounts with privacy settings is the best way to keep your image safe online so you’re less likely to be the direct victim of a deepfake attack.
But when it comes to deepfakes, it’s just as important to know how to spot them so you don’t fall victim to disinformation or fake news.
The following tips can help individuals and businesses identify what’s real and what’s not.
How can you protect yourself against deepfakes?
Average internet users don’t have access to advanced technology that can automatically detect deepfakes.
However, you can look for the following red flags that might indicate that what you see isn’t the real thing.
Consider the source
First, think about where the video is posted. Is it a reliable source? Can you trust that this source fully vets the media that is posted?
If you’re still not sure, search to see if the video is posted elsewhere online, preferably from trusted sources. Check for discrepancies between the videos you find.
If you still don’t feel confident in the source, be wary of whether to trust the information presented in the video.
Look at video details
Although the technology powering deepfakes is often quite sophisticated, these videos frequently contain signals that something is off.
Here are some of the top things to look for when deciding whether a video is a deepfake:
- Low quality: Blurring, flickering, and glitching are common imperfections in deepfake videos.
- Eye movements: Notice whether the eyes are pointing in the right direction, darting around too much, or not blinking at a normal rate.
- Mouth movements: See if the audio lines up correctly with the speaker’s mouth.
- Audio issues: Listen to hear if the voice, tone, or wording sounds strange.
- Lighting problems: Consider whether the lighting and shadows look realistic in the video.
If you are watching a video, try slowing it down or pausing in certain spots to help detect these flaws.
If you suspect a still image might be a deepfake, do a reverse image search to see if there’s an original that’s been manipulated.
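Reverse image search relies in part on perceptual hashing, which matches near-duplicate images even after edits. The sketch below implements the simple "average hash" technique on tiny 4x4 grayscale grids (the grids and pixel values are invented for illustration): each bit records whether a pixel is brighter than the image's mean, so a lightly edited copy produces a hash only a few bits away from the original.

```python
def average_hash(pixels):
    """Bit per pixel: 1 if brighter than the image mean, else 0."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return [1 if p > avg else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Invented 4x4 grayscale "image" (0 = black, 255 = white).
original = [
    [200, 200,  50,  50],
    [200, 200,  50,  50],
    [ 50,  50, 200, 200],
    [ 50,  50, 200, 200],
]
# A lightly manipulated copy: one pixel region darkened.
edited = [row[:] for row in original]
edited[0][0] = 60

d = hamming(average_hash(original), average_hash(edited))
print(d)  # 1 -- a small distance suggests the same underlying image
```

Production systems use larger hashes on downscaled real images, but the idea is identical: a small Hamming distance flags a likely edited copy of a known original.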
How can you protect your business against deepfakes?
If you’re concerned about your business falling victim to a deepfake attack, try using these strategies for added protection.
Train employees in deepfake audio detection
One key concern for businesses is fraudsters using impersonation over the phone to deceive company employees.
In one example, a British energy firm lost $243,000 when a deepfake voice was used to trick the CEO into believing that the head of the parent company was requesting an emergency transfer of funds.
Similar tactics could be used as part of a phishing scam in which employees could be convinced to give over secure information, like passwords or client information.
Training can be used to alert employees to this potential threat and prepare them with questions to ask to verify a caller’s identity.
Make teleconference calls private
In order to carry out the type of deepfake audio scam described above, scammers need access to a significant amount of existing audio of the company leader they intend to impersonate.
Therefore, it can help to minimize the availability of this content online.
Make sure all video calls and webinars are exclusively available to trusted individuals with security measures like password protection.
This can’t eliminate all risk if other images or videos of company leaders are available online, but it will limit the amount of data available to scammers, making it harder to produce a convincing deepfake.
Use blockchain technology
Blockchain technology is another exciting advancement in preserving personal privacy online.
Proponents often note that the technology’s distributed, tamper-evident ledgers can make online data less susceptible to hackers.
But enhanced security on blockchains can also help contribute to deepfake detection.
For instance, users could be asked for identity authentication before gaining access to data or funds.
Additionally, the traceable keys linked with each block on a blockchain could help determine which files are originals and which have been manipulated.
Blockchain files with authentic videos or images could even be digitally signed to help prove their validity.
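The signing idea above can be sketched in a few lines. This is a hypothetical illustration, not any platform's actual protocol: a publisher signs the SHA-256 hash of a media file, and a viewer verifies the signature before trusting the footage. A real deployment would use asymmetric keys (e.g., Ed25519) so viewers need only a public key; HMAC with an invented shared secret is used here just to keep the example dependency-free.

```python
import hashlib
import hmac

SECRET = b"publisher-signing-key"  # invented key, for illustration only

def sign(media_bytes):
    """Hash the media, then produce a keyed signature over the hash."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(SECRET, digest, hashlib.sha256).hexdigest()

def verify(media_bytes, signature):
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign(media_bytes), signature)

video = b"original frame data"
sig = sign(video)

print(verify(video, sig))               # True: untouched file checks out
print(verify(b"tampered frames", sig))  # False: any edit breaks the check
```

Because even a one-byte change to the file changes its hash, a signature recorded alongside an authentic video lets anyone detect later manipulation.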
These types of security protocols could help internet users avoid deepfake scams. They’re already being used by some major corporations, start-ups, and newsrooms as a way to distribute authentic media and digital information.