When several lifelike Tom Cruise deepfakes went viral on TikTok, many saw the future of truth through a glass, darkly -- out of concern for a world where creating deepfakes of major celebrities or political figures would become a "one-click" feature of daily life. Like it or not, we live in a world where anyone can interact with deepfake technology. But producing high-end, specialized deepfakes -- whether for mischief or raising awareness -- is harder than it looks. The creator of the videos -- a Belgian VFX specialist named Chris Ume -- thinks this scenario is unlikely, emphasizing the long timespans and substantial effort required to build each deepfake, in addition to finding an ace Tom Cruise impersonator (Miles Fisher). "You can't do it by just pressing a button," said Ume in a report from The Verge.
Deepfakes, or face-swap videos, are videos or images that use machine learning to fabricate or manipulate depictions of people and events. The most famous examples are celebrity deepfake videos so realistic that viewers can't tell them apart from the real thing. The technology is still relatively new, yet it can already produce highly convincing footage of people saying or doing things they never did. It has many potential uses, from realistic celebrity videos to fake news, and because it is still so early in the technology's life, many people are worried about how it could be put to malicious use.
The furor around deepfakes, porn videos that use machine learning to convincingly edit celebrities into sex scenes, has largely died down since many hosting sites banned the clips months ago. But deepfakes are still out there, even on sites where they're not technically allowed. Popular streaming site PornHub, which classifies deepfakes as nonconsensual and theoretically doesn't permit them, still hosts dozens of the videos. BuzzFeed's Charlie Warzel wrote on Wednesday that he'd found more than 100 deepfake videos on PornHub, and they weren't particularly well-hidden. Searches like "deepfake" and "fake deeps" brought up dozens of clips.
Watch -- very closely -- as an ambitious group of A.I. engineers and machine-learning specialists try to mimic reality with such accuracy that you may not be able to tell what's real from what's not. If successful, they'll have created the ultimate deepfake, an ultrarealistic video that makes people appear to say and do things they haven't. Experts warn it may only be a matter of time before someone creates a bogus video that's convincing enough to fool millions of people. Over several months, "The Weekly" embedded with a team of creative young engineers developing the perfect deepfake -- not to manipulate markets or game an election, but to warn the public about the dangers of technology meant to dupe them. The team picked one of the internet's most recognizable personalities, the comedian and podcaster Joe Rogan, who unwittingly provided the inspiration for the engineers' deepfake moonshot.
The Cyberspace Administration of China (CAC) announced on Friday that it is making it illegal for fake news to be created with deepfake video and audio, according to Reuters. "Deepfakes" are video or audio content that has been manipulated using AI to make it look like someone said or did something they never did. In its statement, the CAC said "With the adoption of new technologies, such as deepfake, in online video and audio industries, there have been risks in using such content to disrupt social order and violate people's interests, creating political risks and bringing a negative impact to national security and social stability," according to the South China Morning Post's reporting on the new regulations. The CAC's regulations, which go into effect on January 1, 2020, require publishers of deepfake content to disclose that a piece of content is, indeed, a deepfake. They also require content providers to detect deepfake content themselves, according to the South China Morning Post.