Slick Tom Cruise Deepfakes Signal That Near Flawless Forgeries May Be Here

Mar 11, 2021
Originally published on March 12, 2021 3:54 pm

In a crop of viral videos featuring Tom Cruise, it's not the actor's magic trick nor his joke-telling that's deceptive — but the fact that it's not actually Tom Cruise at all.

The videos, uploaded to TikTok in recent weeks by the account @deeptomcruise, have raised new fears over the proliferation of believable deepfakes — the nickname for media generated by artificial intelligence technology showing phony events that often seem realistic enough to dupe an audience.

Hany Farid, a professor at the University of California, Berkeley, told NPR's All Things Considered that the Cruise videos demonstrate a step up in the technology's evolving sophistication.

"This is clearly a new category of deepfake that we have not seen before," said Farid, who researches digital forensics and misinformation.

Deepfakes have been around for years, but, Farid says, the technology has been steadily advancing.

"Every three to four months a video hits Tik Tok, YouTube, whatever, and it's just — wow, this is much, much better than before," he said.

Farid says that, to the trained eye, a distortion of Cruise's pupils in the videos was a red flag that gave away the fakery.

The clues were much easier to spot in a glitchier 2018 deepfake video of an uncanny Barack Obama, later unmasked as comedian Jordan Peele impersonating the former president. The following year, a video of Nancy Pelosi, doctored to make the House speaker sound intoxicated by slowing down her speech, proved more convincing, even though it was a simple edit rather than a true AI-generated deepfake.

In 2020, the warnings that deepfakes would be leveraged as a dominant disinformation tool during the presidential election cycle went largely unrealized. But cybersecurity experts say that was only because less sophisticated tactics, like lies, crude video edits and memes, were working just fine as sources of deception.

Plus, deepfakes are time-consuming and require some technical prowess.

Chris Ume, a visual effects artist who created the Cruise deepfakes, told The Verge that each video clip was the product of weeks of work. He also relied on the talents of actor Miles Fisher, a Cruise lookalike, to impersonate the movie star before giving Fisher a digital face transplant.

Using open-source deepfake software, existing editing tools and his own visual effects expertise, Ume said, "I make sure you don't see any of the glitches."

Still, he told the website, it took a couple of months to train a machine learning algorithm by feeding a trove of Hollywood footage of Cruise through high-end graphics processors.
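Ume hasn't published his exact toolchain beyond "open-source deepfake software," but the core idea, as Farid explains in the interview below, is face replacement on every single frame of the video. The following is a minimal, conceptual sketch of that frame-by-frame loop in Python with OpenCV, not Ume's actual pipeline; the trained face-swap model is left as a hypothetical `swap_face` placeholder, since that model, which Ume says took months of training on Cruise footage, is the hard part.

```python
# Conceptual sketch only -- not Ume's actual pipeline. It shows the
# frame-by-frame structure of a video face swap: decode each frame,
# composite a generated face onto it, and re-encode the result.
import cv2


def swap_faces_in_video(src_path, dst_path, swap_face):
    """swap_face is a hypothetical callable: it takes one BGR frame and
    returns the same frame with the target face composited in. In a real
    pipeline this would wrap a trained deep-learning model."""
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)  # typically 24-30 frames per second
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"),
                          fps, (width, height))
    while True:
        ok, frame = cap.read()
        if not ok:  # end of the source video
            break
        out.write(swap_face(frame))  # every single frame gets a new face
    cap.release()
    out.release()
```

Even with a loop this simple, the believability comes entirely from the model behind the face swap and the manual cleanup afterward, which is where Ume says the weeks of work per clip go.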

Ume credits Fisher for nailing Cruise's likeness, from the intense eye contact to his signature laugh. "He's a really talented actor," Ume told The Verge. "I just do the visual stuff."

The artist, who told CNET that his videos are strictly a creative pursuit, also said he wanted to raise awareness of how far deepfake technology has advanced.

Ume said he is less convinced that we've reached an ominous point where the technology can be readily abused.

"It's not like you're sitting at home and you can just click on a button and you can create the same thing we did," he told the tech publication.

While Ume's videos have been made with tongue very much in cheek, there are more nefarious cases in which deepfakes have been used, including nonconsensual deepfake pornography. A 2019 report from Sensity, a company that tracks visual threats, found that nonconsensual deepfake pornography accounted for more than 90% of all deepfake material online.

UC Berkeley's Farid cautioned that it's not just the content that poses risks, but how easily and quickly misinformation can travel across social media. Deepfakes, he said, are "now throwing jet fuel onto that already burning fire."

He offered a hypothetical example: Say he created a deepfake video to show Amazon executive Jeff Bezos saying that the company's profits have taken a hit. If that video goes viral, he posited, "How long does it take me to move the market to the tune of billions of dollars?"

What's more, Farid said, AI tools that were once in the hands of academics are now widely available through apps and open-source code — as demonstrated with Ume's videos.

"Now you have the perfect storm," he said. "I can create this content easily, inexpensively and quickly, I can deliver it en masse to the world, and I have a very willing and eager public that will amplify that for me."

NPR's Vincent Acovino and Patrick Jarenwattananon produced and edited the audio version of this story.

Copyright 2021 NPR. To see more, visit https://www.npr.org.

MARY LOUISE KELLY, HOST:

Perhaps you have seen the viral TikTok where Tom Cruise purportedly performs a magic trick with a coin.

(SOUNDBITE OF TIKTOK VIDEO)

MILES FISHER: (As Tom Cruise) I'm going to show you some magic. It's the real thing (laughter).

KELLY: That laugh - unmistakably Tom Cruise, right? And it sure looks like Tom Cruise when you watch the video. It is not, though. This video is a deepfake, an image altered with artificial intelligence in a way that makes it difficult - really difficult - to tell that it is not real. Well, here to tell us how it works is University of California, Berkeley professor Hany Farid. Welcome.

HANY FARID: It's good to be here.

KELLY: What did you make of this Tom Cruise deepfake? The voice, the mannerisms, they are perfect. Did it fool you?

FARID: It's exceedingly well done. And it's been - it was interesting to see because part of the evolution of what we've been seeing since 2017, where every three to four months a video hits TikTok, YouTube, whatever, and it's just, wow, this is much, much better than before. And this is clearly a new category of deepfake that we have not seen before.

KELLY: Just explain, what are we actually seeing? Is this real video of Cruise, but it's been manipulated? Is this an actor? What's happening?

FARID: What you're seeing in these videos is not Tom Cruise. It is an actor who looks a little bit like Tom Cruise, clearly sounds like Tom Cruise. But on every frame of the video, at somewhere between 24 and 30 frames per second, the actor's face was replaced with Tom Cruise's face. And that process is done digitally and with advances in machine learning and big data. And almost certainly, there was some post-production in this to sort of clean it up and get it really polished and high quality. And if you can replace somebody's face on every frame of a video, you can make it look like it's Tom Cruise or you or me or anybody else.

KELLY: Is this legal? I was looking - the account in question here is @deeptomcruise. That's the account posting this. Does TikTok have an obligation to take that account down once it has been established that this isn't real, that this is a deepfake?

FARID: Man, that's a great question. So I'm not the lawyer to ask this question to, but there is a really interesting question here around identity. So for example, many states have passed laws banning nonconsensual pornography, where one person's likeness is inserted into sexually explicit material. And you can see clearly why you would do that. It is harmful to that individual.

This one's a little bit different. It's not clear that it's harmful to Tom Cruise. Now, he may say, look, this is a copyright infringement because you're using my face and my likeness, at which point TikTok or YouTube or whomever would be obligated to take down the material.

But I think where we are starting to tread into some interesting legal and ethical territory is, who owns that identity? And if that person is a person in sort of the public sphere - a president, an actor - do they have different rights than, say, a private individual like me or you? I don't think we've fully figured out how we're going to navigate that space.

KELLY: Yeah. And when you talk about the dangers of this, I'm thinking there's such a range. There's, you know, you touched on pornography, nonconsensual pornography - you know, a woman's photo being linked to something that she is not doing. I also read where you have talked about the potential of deepfakes to pose a national security risk. How so?

FARID: So here's a couple of scenarios you can imagine. Somebody creates a video of President Biden saying, I've launched nuclear weapons against Iran, and that goes viral online. How long does it take before somebody panics and pushes the button in return? And that's the danger here, is first of all, it's not just the content, but it's that we can deliver it online en masse to millions of people around the world, have it go viral and before anybody gets around to figuring out that it's fake, we have a global nuclear meltdown.

Here's another scenario. I create a video of Jeff Bezos quietly saying that Amazon's profits are down 20%. That video goes viral. How long does it take me to move the market to the tune of billions of dollars?

Now, are either of those scenarios highly likely? No. But are they possible? Yes. And the consequences should give us pause because we know that things can spread online incredibly fast. And before anybody circles around to figuring out what's what, you can imagine some very, very bad consequences from that material.

And frankly, that is outside of the deepfake phenomenon. Why we have the misinformation apocalypse that is upon us now is because it's so easy to spread misinformation, and people are so willing and eager to spread it. And deepfakes are now throwing jet fuel onto that already burning fire.

KELLY: Professor Hany Farid of the University of California, Berkeley. Professor Farid, thanks.

FARID: It's very good to be with you. Thanks for talking.

(SOUNDBITE OF MUSIC)

Transcript provided by NPR, Copyright NPR.