Deepfakes, or face-swap videos, are videos or images that use machine learning to create or manipulate visuals of people and events. The best-known examples are celebrity deepfake videos realistic enough that viewers cannot tell them from authentic footage. The technology is still relatively new, but it can already produce highly convincing videos of people saying or doing things they never did. That capability has many potential uses, from realistic celebrity videos to fake news, and because the technology is so young, many observers worry about how it could be abused.
The furor around deepfakes, porn videos that use machine learning to convincingly edit celebrities into sex scenes, has largely died down since many hosting sites banned the clips months ago. But deepfakes are still out there, even on sites where they're not technically allowed. Popular streaming site PornHub, which classifies deepfakes as nonconsensual and theoretically doesn't permit them, still hosts dozens of the videos. BuzzFeed's Charlie Warzel wrote on Wednesday that he'd found more than 100 deepfake videos on PornHub, and they weren't particularly well-hidden. Searches like "deepfake" and "fake deeps" brought up dozens of clips.
A team of researchers from the State University of New York (SUNY) recently developed a method for detecting whether the people in a video are AI-generated. It looks like deepfakes could have met their match. What it means: fear that computers will soon generate videos indistinguishable from real footage may be much ado about nothing, at least with currently available methods. The SUNY team observed that the AI that makes fake videos is trained on still images, not video. As a result, certain human physiological quirks, like breathing and blinking, don't show up in computer-generated videos.
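The blink cue described above can be turned into a simple heuristic: track how open each eye is frame by frame and flag videos where blinks never occur. The sketch below is illustrative only, not the SUNY researchers' actual code; it assumes six eye-landmark points per frame have already been extracted by some facial-landmark detector (the six-point layout follows the common dlib-style convention), and the function names and thresholds are hypothetical.

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points around one eye.
    The ratio of eyelid height to eye width drops sharply when the
    eye closes, so a blink appears as a brief dip in this value."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    v1 = dist(eye[1], eye[5])  # vertical distance, inner eyelid pair
    v2 = dist(eye[2], eye[4])  # vertical distance, outer eyelid pair
    h = dist(eye[0], eye[3])   # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count dips of the eye aspect ratio below `threshold` lasting at
    least `min_frames` consecutive frames. A real face blinks every few
    seconds, so a long stretch of video with zero blinks is a red flag
    under this heuristic."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

For example, a per-frame series like `[0.3, 0.3, 0.15, 0.1, 0.3, ...]` contains one two-frame dip and would count as one blink, while a synthetic face that never closes its eyes would produce a flat series and a count of zero.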
How do you defeat "deepfakes"? According to Google, you develop more of them. Google just released a large, free database of deepfake videos to help researchers develop detection tools. It collaborated with Jigsaw, a tech incubator founded by Google, and with the FaceForensics benchmark program at the Technical University of Munich and the University Federico II of Naples. Together they hired paid actors to record hundreds of real videos, then used popular deepfake techniques to generate thousands of fake ones.
When several lifelike Tom Cruise deepfakes went viral on TikTok, many saw the future of truth through a glass, darkly, worrying about a world where acquiring deepfakes of major celebrities or political figures becomes a "one-click" feature of daily life. Like it or not, we live in a world where anyone can interact with deepfake technology. But creating high-end, specialized deepfakes, whether for mischief or for raising awareness, is harder than it looks. The creator of the videos, a Belgian VFX specialist named Chris Ume, thinks the one-click scenario is unlikely, citing the impractically long timespans and substantial effort required to build each deepfake, on top of finding an ace Tom Cruise impersonator (Miles Fisher). "You can't do it by just pressing a button," Ume told The Verge.