What Are Deepfakes? How to Spot Fake AI Audio and Video


Aug 31, 2023


Computers have become increasingly good at simulating reality. Media generated by artificial intelligence (AI) has been making serious headlines — especially videos designed to mimic someone, making it appear as though they're saying or doing something they aren't.

A Twitch streamer was caught on a website known for making AI-generated pornography of his peers. A group of New York students made a video of their principal saying racist remarks and threatening students. In Venezuela, generated videos are being used to disseminate political propaganda.

In all three cases, AI-generated video was made with the goal of convincing you that someone did something they never actually did. There's a word for this kind of content: deepfakes.

Deepfakes use AI to generate completely new video or audio, with the end goal of portraying something that didn't actually occur in reality.

The term "deepfake" comes from the underlying technology — deep learning algorithms — which teach themselves to solve problems with large sets of data and can be used to create fake content of real people.

"A deepfake would be footage that is generated by a computer that has been trained through countless existing images," said Cristina López, a senior analyst at Graphika, a firm that researches the flow of information across digital networks.

Deepfakes aren't just any fake or misleading images. The AI-generated pope in a puffer jacket, or the fake scenes of Donald Trump being arrested that circulated shortly before his indictment, are AI-generated, but they're not deepfakes. (Images like these, when combined with misleading information, are commonly referred to as "shallowfakes.") What separates a deepfake is the element of human input.

When it comes to deepfakes, the user only decides at the very end of the generation process whether what was created is what they wanted; beyond tailoring the training data and saying "yes" or "no" to what the computer generates after the fact, they have no say in how the computer chooses to make it.

Note: Computer-assisted technologies like Photoshop and CGI are commonly used to create media, but the difference is that humans are involved in every step of the process, barring recent developments like Adobe's generative Firefly tool. "You have a lot of AI assistance with CGI, but at the end of the day there is a human with a human viewpoint controlling what the output is going to be," López said.

There are several methods for creating deepfakes, but the most common relies on the use of deep neural networks that employ a face-swapping technique. You first need a target video to use as the basis of the deepfake and then a collection of video clips of the person you want to insert in the target.

The videos can be completely unrelated; the target might be a clip from a Hollywood movie, for example, and the videos of the person you want to insert in the film might be random clips downloaded from YouTube.

The program guesses what a person looks like from multiple angles and conditions, then maps that person onto the other person in the target video by finding common features.
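The mapping step can be illustrated with a toy example. The sketch below (hypothetical and heavily simplified — real pipelines use dozens of landmarks and learned encoders, not two points) computes a similarity transform from two shared landmarks, such as the eye centers, so that any point on the source face can be carried onto the target frame:

```python
def similarity_transform(src, dst):
    """Return a function mapping points so that src[0] -> dst[0] and src[1] -> dst[1].

    src, dst: two (x, y) landmark pairs, e.g. left/right eye centers.
    Complex arithmetic does the work: multiplying by `a` rotates and
    uniformly scales, adding `b` translates.
    """
    s0, s1 = complex(*src[0]), complex(*src[1])
    d0, d1 = complex(*dst[0]), complex(*dst[1])
    a = (d1 - d0) / (s1 - s0)   # rotation + uniform scale
    b = d0 - a * s0             # translation
    def apply(point):
        z = a * complex(*point) + b
        return (z.real, z.imag)
    return apply

# Align a source face onto a target face's eye positions;
# every other source landmark (nose, mouth) follows the same warp.
warp = similarity_transform(src=[(100, 120), (160, 120)],   # source eyes
                            dst=[(210, 305), (270, 310)])   # target eyes
nose_in_target = warp((130, 160))
```

Once the faces are aligned this way, the generated face can be blended over the target frame by frame.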

Another type of machine learning is often added to the mix: Generative Adversarial Networks (GANs), which detect and iron out flaws in the deepfake over multiple rounds, making it harder for deepfake detectors to spot it.
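The adversarial back-and-forth can be caricatured in a few lines. This is not a real GAN — there are no neural networks, and the "discriminator" is just a fixed distance rule — it only illustrates the feedback loop in which the generator keeps adjusting its output until the judge can no longer separate fake from real:

```python
import random

random.seed(0)
REAL_MEAN = 4.0   # "real" data cluster around this value
mu = 0.0          # the generator's single adjustable parameter

for _ in range(300):
    fake = mu + random.gauss(0, 0.1)   # generator produces a sample
    # Crude "discriminator": flag the sample as fake if it sits
    # closer to the generator's current output than to the real data.
    flagged = abs(fake - REAL_MEAN) > abs(fake - mu)
    if flagged:
        mu += 0.1 * (REAL_MEAN - mu)   # generator nudges toward realism

# After many rounds, mu sits near REAL_MEAN and the flagging rule
# fails about as often as it succeeds — the "fake" has become
# statistically indistinguishable from the real samples.
```

In a real GAN both sides are neural networks trained simultaneously, so the discriminator also improves each round; the generator's output only gets accepted once it fools an ever-better critic.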

Though the process is complex, the software is rather accessible. Several apps make generating deepfakes easy even for beginners — such as the Chinese app Zao, DeepFace Lab, FakeApp, and Face Swap — and a great deal of deepfake software can be found on GitHub, an open source development community.

Deepfake technology has historically been used for illicit purposes, including to generate non-consensual pornography. The FBI released a public service announcement in June 2023 warning the public about the dangers of generative AI, and how it's used for "Explicit Content Creation," "Sextortion," and "Harassment."

In 2017, a Reddit user named "deepfakes" created a forum for porn that featured face-swapped actors. Since then, deepfake porn (particularly revenge porn) has repeatedly made the news, severely damaging the reputations of celebrities and prominent figures. According to a Deeptrace report, pornography made up 96% of deepfake videos found online in 2019.

Deepfakes have also been used for non-sexual criminal activity, including one instance in 2023 that involved the use of deepfake technology to mimic the voice of a woman's child to threaten and extort her.

Deepfake video has also been used in politics. In 2018, for example, a Belgian political party released a video of Donald Trump giving a speech calling on Belgium to withdraw from the Paris climate agreement. Trump never gave that speech, however – it was a deepfake. That was not the first use of a deepfake to create misleading videos, and tech-savvy political experts are bracing for a future wave of fake news that features convincingly realistic deepfakes.

But journalists, human rights groups, and media technologists have also found positive uses for the technology. For instance, the 2020 HBO documentary "Welcome to Chechnya" used deepfake technology to hide the identities of Russian LGBTQ refugees whose lives were at risk while also telling their stories.

WITNESS, an organization focused on the use of media to defend human rights, has expressed optimism around the technology when used in this way, while also recognizing digital threats.

"Part of our work is really exploring the positive use of that technology, from protecting people like activists on video, to taking advocacy approaches, to doing political satire," said shirin anlen, a media technologist for WITNESS.

For anlen and WITNESS, the technology isn't something to be entirely feared. Instead, it should be seen as a tool. "It's building on top of a long term relationship we have had with audiovisuals. We've already been manipulating audio. We've already been manipulating visuals in different ways," anlen said.

Experts like anlen and López believe that the best approach the public can take to deepfakes is not to panic, but to be informed about the technology and its capabilities.

There are a handful of indicators that give away deepfakes:

- Do details seem blurry or obscure? Does the lighting look unnatural?
- Do the words or sounds not match up with the visuals?
- Does the source seem reliable?

As technology improves, the discrepancies between real and fake content will likely become harder to detect. For that reason, experts like anlen believe that the burden shouldn't be on individuals to detect deepfakes out in the wild.

"The responsibility should be on the developers, on the toolmakers, on the tech companies to develop invisible watermarks and signal what the source of that image is," anlen said. And a number of startup companies are developing methods for spotting deepfakes.

Sensity, for example, has developed a detection platform akin to an antivirus for deepfakes: it alerts users via email when they're watching something that bears the telltale fingerprints of AI-generated media. Sensity uses the same deep learning processes that are used to create fake videos.

Operation Minerva takes a more straightforward approach to detecting deepfakes. This company's algorithm compares potential deepfakes to known video that has already been "digitally fingerprinted." For example, it can detect examples of revenge porn by recognizing that the deepfake video is simply a modified version of an existing video that Operation Minerva has already cataloged.
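The "digital fingerprint" idea can be sketched with a toy average-hash — a simplified stand-in, not Operation Minerva's actual proprietary method. Each image is reduced to a short bit string; a doctored copy of a cataloged video yields a nearly identical fingerprint, so a small Hamming distance flags the match:

```python
def average_hash(pixels):
    """Fingerprint a tiny grayscale image (2-D list of 0-255 values):
    each pixel becomes one bit, set if it is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return ''.join('1' if p > avg else '0' for p in flat)

def hamming(a, b):
    """Number of differing bits between two equal-length fingerprints."""
    return sum(x != y for x, y in zip(a, b))

original  = [[10, 200], [30, 220]]   # frame from a cataloged video
retouched = [[12, 198], [28, 222]]   # slightly edited copy of it
unrelated = [[200, 10], [220, 30]]   # different content

near = hamming(average_hash(original), average_hash(retouched))  # 0 bits differ
far  = hamming(average_hash(original), average_hash(unrelated))  # 4 bits differ
```

Because small edits barely move pixels relative to the frame's average brightness, the fingerprint survives retouching — which is exactly what makes a modified revenge-porn clip traceable back to its cataloged source.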

Despite these advances, Nasir Memon, a professor of computer science and engineering at NYU, said there haven't been any efforts to combat deepfakes at scale, and that any solution that comes won't be a cure-all for stopping harmful deepfakes from spreading.

"I think the solution overall is not technology based, but instead it's education, awareness, the right business models, incentives, policies, laws," Memon said.

Note: Several states, including California, New York, and Virginia, have passed or attempted to pass legislation outlawing the use of deepfakes in specific contexts, such as pornography or politics.

A growing problem is the use of deepfakes in a live setting to mask one's identity in the moment, like over a phone call or Zoom meeting. According to Memon, the threat of someone using a fake identity could arise in any number of situations, from job interviews to remote college exams to visa applications. Even Insider reporters have had to deal with AI-generated scams reaching out as sources.

"The problem with detection is that the burden is on the defender," Memon said. "I have to analyze every single image in every possible way. But in security, you want to do it the opposite way." Ideally, technology would be developed to detect these types of live deepfakes, too.

Still, Memon doesn't expect this kind of approach to be the end of the deepfake question.

"Nothing is going to completely solve the problem. Authenticity always has to be tested in society," he said. "Don't jump to conclusions when you see an image now. Look at the source. Wait until you have corroborating evidence from reliable sources."

