It wouldn’t be completely out of character for Joe Rogan, the comedian turned podcaster, to endorse a “libido-boosting” coffee brand for men. But when a video circulating on TikTok recently showed Rogan and his guest, Andrew Huberman, hawking the coffee, some eagle-eyed viewers were shocked — including Dr. Huberman. “Yep that’s fake,” Dr. Huberman wrote on Twitter after seeing the ad, in which he appears to praise the coffee’s testosterone-boosting potential, an endorsement he never made.
The ad was one of a growing number of fake videos on social media made with technology powered by artificial intelligence. Experts said Rogan’s voice appeared to have been synthesized using A.I. tools that mimic celebrity voices. Dr. Huberman’s comments were ripped from an unrelated interview. Making realistic fake videos, often called deepfakes, once required elaborate software to put one person’s face onto another’s. But now, many of the tools to create them are available to everyday consumers — even on smartphone apps, and often for little to no money.
The new altered videos — mostly, so far, the work of meme-makers and marketers — have gone viral on social media sites like TikTok and Twitter. The videos, sometimes called cheapfakes by researchers, work by cloning celebrity voices, altering mouth movements to match alternative audio and writing persuasive dialogue.
The videos, and the accessible technology behind them, have some A.I. researchers fretting about their dangers, and have raised fresh concerns over whether social media companies are prepared to moderate the growing digital fakery. Disinformation watchdogs are also steeling themselves for a wave of digital fakes that could deceive viewers or make it harder to know what is true or false online.
“What’s different is that everybody can do it now,” said Britt Paris, an assistant professor of library and information science at Rutgers University who helped coin the term “cheapfakes.” “It’s not just people with sophisticated computational technology and fairly sophisticated computational know-how. Instead, it’s a free app.”
Reams of manipulated content have circulated on TikTok and elsewhere for years, typically using more homespun tricks like careful editing or the swapping of one audio clip for another. In one video on TikTok, Vice President Kamala Harris appeared to say everyone hospitalized for Covid-19 was vaccinated. In fact, she said the patients were unvaccinated.
Graphika, a research firm that studies disinformation, spotted deepfakes of fictional news anchors that pro-China bot accounts distributed late last year, in the first known example of the technology’s being used for state-aligned influence campaigns. But several new tools offer similar technology to everyday internet users, giving comedians and partisans the chance to make their own convincing spoofs.
Last month, a fake video circulated showing President Biden declaring a national draft for the war between Russia and Ukraine. The video was produced by the team behind “Human Events Daily,” a podcast and livestream run by Jack Posobiec, a right-wing influencer known for spreading conspiracy theories.