You’ve heard about ‘fake news,’ but what about ‘deepfakes’?

Sophisticated video technology threatens the public’s trust in what they see

April 29, 2019

Don't trust everything you see on video! A new technology allows video creators to literally put words in the mouths of celebrities, politicians, and regular people alike.

Not a week goes by without someone calling a report or headline they object to “fake news.” Defined as the willful spreading of false or malicious information in order to damage a person or institution, fake news is a centuries-old tactic. 

Octavian, the first emperor of Rome, oversaw a fake news campaign against his rival Mark Antony in the first century BC, calling Antony a drunk, a womanizer, and a puppet of Egyptian Queen Cleopatra.

Centuries later, Benjamin Franklin published fake news articles about “murderous scalping Indians” working with King George III to rally colonist support for the American Revolution.

What may be the most famous fake news of all time occurred on Oct. 30, 1938, when Orson Welles’ radio drama War of the Worlds, a fictional account of a Martian attack, was mistaken for an actual news broadcast. Though preceded by a disclaimer stating that what followed was fictional, it nevertheless caused panic and hysteria for those who tuned in late.

With the invention of the Internet and rise of social media, disseminating fake news is now faster, easier, and more far-reaching than ever before.


Fake news played a role in the 2016 and 2018 U.S. elections. Looking toward the upcoming 2020 election, experts warn about a new version of fake news, one that is far more advanced and difficult to identify. Called “deepfake,” it relies on sophisticated technology that is readily available to anyone with an Internet connection.

Using artificial intelligence and facial mapping software, it’s now relatively cheap and easy to insert someone’s face and voice into video footage to make it appear that person is doing and/or saying something he or she never actually said or did.

The effect has been used on countless late-night comedy programs and silly YouTube posts, but in the last few years the technology has become so advanced that it's nearly impossible to spot the fake.

Sen. Marco Rubio (R-Fla.) expressed concern about deepfakes at a Senate Intelligence Committee Hearing, saying, “This…technology is pretty widely available on the Internet, and people have used it already for all sorts of nefarious purposes at the individual level. I think you can only imagine what a nation-state could do with that technology, particularly to our politics.”

Obscene posts

Deepfakes first surfaced on the Internet in 2017, when pornographic images and videos of famous actresses like Scarlett Johansson and Gal Gadot were posted on Reddit, Twitter, and other social media platforms. Upon closer examination, it was discovered that images of the actresses had been digitally manipulated and superimposed into video footage of other people. The technology was so realistic that the "special effect" went unnoticed by the untrained eye.

In a statement to The Washington Post, Johansson called the Internet a “vast wormhole of darkness that eats itself” saying she did not try to remove the deepfake videos of her because such an effort would be “a lost cause.”

Fake pornographic images of a movie star are despicable and may have negative career consequences, but when it comes to deepfakes, there’s a far greater threat.

Liar’s dividend

At that same hearing, Rubio outlined several terrifying scenarios. “If we could imagine for a moment, a foreign intelligence agency could use deepfakes to produce a fake video of an American politician using a racial epithet or taking a bribe or anything of that nature….And imagine a compelling video like this produced on the eve of an election or a few days before a major public policy decision.

“I believe that this is the next wave of attacks against America and Western democracies,” Rubio continued. “The ability to produce fake videos that can only be determined to be fake after extensive analytical analysis, and by then the election is over and millions of Americans have seen an image they want to believe anyway because of their preconceived bias against that individual.”

Another member of the Senate Intelligence Committee, Mark Warner (D-Va.) has proposed amending the Communications Decency Act. “Currently, the onus is on victims to exhaustively search for and report this content to platforms—[that] frequently take months to respond and are under no obligation thereafter to proactively prevent the same content from being re-uploaded in the future,” Warner wrote in his proposal.

He goes on to suggest making social media platforms responsible for the content on their sites, and fining them if malicious content isn’t removed promptly. Defining malicious content, however, is problematic.

Remember those comedy shows that use similar deepfake technology to poke fun at the day's headlines? Would they be subject to this law?

There's little doubt that people will continue to believe the videos they watch are authentic, at least until they have been duped too many times. Then skepticism will take over, creating another frightening situation.

U.S. law professors Robert Chesney and Danielle Citron call it “the liar’s dividend”—when the public has been tricked and deceived by enough deepfakes, they may start to disbelieve all videos—real and fake.

No one has proposed what to do when that happens.