Fake videos could threaten what is left of reality
New, cheap tools to create realistic-looking fake videos are already here, and brand reputation, news reporting and political campaigns may never be the same.
If you value facts, democracy or markets geared around supply-and-demand, you may want to sit down.
Relatively inexpensive tools that can generate fake but realistic-looking videos have recently emerged. The ability to manufacture authentic-looking fakery is no longer restricted to high-end Hollywood special effects shops.
This poses a previously unknown level of risk to how democracy or markets are supposed to work, Michael Fauscette told me. He’s the chief research officer at business software review platform G2 Crowd.
One of the tools was recently publicized on Reddit and made available on GitHub, he noted.
“I gave it to someone,” he said. Within 24 hours, there was a realistic-looking video of a person walking around. Except the person shown was actually a composite: one person’s head grafted onto another person’s body.
The current level of tools, Fauscette said, can swap heads or other body parts, generate lip movements that seem to correspond to the words being said, and even generate fake audio, where someone’s voice says things they never said.
This video shows research that generates authentic-looking video of a talking person, saying words they said in another video:
Fauscette noted that, already, the videos generated with these tools “are good enough to fool a general audience.”
Experts can still detect them, using image forensic tools to look for slight vibrations or variations along edges that betray a head that was not originally attached to that body. Audio tools can likewise dissect the spoken words and flag unnatural intonation or phrasing.
And machine learning can be employed to look for clues that humans can’t detect, the way such systems find patterns in piles of data that are otherwise imperceptible. Show a machine learning platform enough examples of fake videos, and it might learn to detect them on its own.
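The ideas above can be sketched in miniature. Everything here is invented for illustration: the “seam jitter” feature (frame-to-frame variation along the head-body edge, echoing the forensic clue mentioned earlier), the synthetic data, and the decision threshold. Real detectors use far richer features and deep models, but the workflow is the same: show a learner labeled examples of real and fake clips, and let it find the boundary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: assume fake clips tend to show higher jitter
# along the head-body seam than genuine footage (a made-up feature).
real_jitter = rng.normal(loc=0.2, scale=0.05, size=200)
fake_jitter = rng.normal(loc=0.6, scale=0.05, size=200)
X = np.concatenate([real_jitter, fake_jitter])
y = np.concatenate([np.zeros(200), np.ones(200)])  # 1 = fake

# Fit a one-feature logistic regression by plain gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X * w + b)))   # predicted probability of "fake"
    w -= 0.5 * np.mean((p - y) * X)          # gradient step on the weight
    b -= 0.5 * np.mean(p - y)                # gradient step on the bias

def looks_fake(jitter):
    """True if the clip's seam-jitter score crosses the learned boundary."""
    return 1.0 / (1.0 + np.exp(-(jitter * w + b))) > 0.5

print(looks_fake(0.15), looks_fake(0.65))  # low jitter -> False, high -> True
```

The point of the toy example is the arms-race dynamic the article describes: a classifier like this is only as good as the artifacts it was trained to spot, so as generation tools improve, the features that separate real from fake keep shifting.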
But, Fauscette pointed out, we’re talking about tools that have only become available over the last year. They will keep improving, in an arms race against detection methods.
It’s inevitable, he said, that extremely capable tools will become widely available, the way tools are now widely available for generating fake news or creating sophisticated software viruses.
The first use in business, Fauscette predicted, could well be “a new form of ransomware.” Perhaps a realistic but fabricated compromising video of a CEO, with payment demanded to prevent its release.
Or a brand’s reputation could be at risk, such as a video showing a factory using child workers or committing a major environmental crime.
The sudden release of these and countless other variations could cause the immediate crash of a stock price, or the destruction of a brand’s reputation that could take years to heal.
And that’s the bright side.
The worse news: Terrorists could release a video of Russian President Putin ordering the launch of nuclear missiles.
Would the American military be able to immediately determine if the video was real or not, when millions of lives depended on an immediate reaction?
“I’m not a doom and gloom kind of guy,” Fauscette told me, but there’s a “huge risk about this.”
He noted that we may have to treat every video as suspect unless it is authenticated, guilty until proven innocent, the way skeptical observers first approach UFO footage. If you hear a live radio broadcast of aliens attacking New Jersey, as in Orson Welles’ panic-inducing 1938 version of “War of the Worlds,” stay calm and check other sources.
If so, that kind of instinctive doubt needs to start now.
“The black hats are ahead of the white hats [at this point],” Fauscette warned.