Deepfakes, often called face-swap videos, are videos or images that use machine learning to fabricate or manipulate depictions of people and events. The best-known examples are celebrity deepfake videos so realistic that viewers cannot tell them apart from genuine footage. Although the technology is still relatively new, it can already produce highly convincing videos of people saying or doing things they never did. That capability has many potential uses, from realistic celebrity videos to fake news, and because the technology is so young, many people worry about how it could be put to malicious use.
When several lifelike Tom Cruise deepfakes went viral on TikTok, many saw the future of truth through a glass, darkly, fearing a world where acquiring deepfakes of major celebrities or political figures becomes a "one-click" feature of daily life. Like it or not, we live in a world where anyone can interact with deepfake technology. But producing high-end, specialized deepfakes, whether for mischief or to raise awareness, is harder than it looks. The creator of the videos, a Belgian VFX specialist named Chris Ume, thinks such a scenario is unlikely, emphasizing the impractically long timespans and substantial effort required to build each deepfake, in addition to finding an ace Tom Cruise impersonator (Miles Fisher). "You can't do it by just pressing a button," said Ume in a report from The Verge.
How do you defeat "deepfakes"? According to Google, you develop more of them. Google has released a large, free database of deepfake videos to help researchers develop detection tools. Google collaborated with Jigsaw, a technology incubator it founded, and with the FaceForensics benchmark program at the Technical University of Munich and the University Federico II of Naples. They worked with paid actors to record hundreds of real videos, then used popular deepfake technologies to generate thousands of fake ones.
Watch very closely as an ambitious group of A.I. engineers and machine-learning specialists try to mimic reality with such accuracy that you may not be able to tell what's real from what's not. If successful, they'll have created the ultimate deepfake: an ultrarealistic video that makes people appear to say and do things they haven't. Experts warn it may be only a matter of time before someone creates a bogus video convincing enough to fool millions of people. Over several months, "The Weekly" embedded with a team of creative young engineers developing the perfect deepfake, not to manipulate markets or game an election, but to warn the public about the dangers of technology meant to dupe them. The team picked one of the internet's most recognizable personalities, the comedian and podcaster Joe Rogan, who unwittingly provided the inspiration for the engineers' deepfake moonshot.
In an effort to boost research and development in deepfake detection, Google has shared a database containing 3,000 deepfake videos with the new FaceForensics benchmark, a research project by researchers at the Technical University of Munich and the University Federico II of Naples. Deepfakes are audio or visual content doctored by artificial intelligence (AI). They are considered a major threat because they allow malicious actors to spread disinformation and influence public opinion by making it appear that influential individuals, including government, corporate, and military leaders, candidates in democratic elections, scientists, and celebrities, said or did things they never actually said or did. The videos released by Google were produced with the help of paid actors; after recording hundreds of real videos, Google researchers used publicly available tools to generate thousands of deepfake videos.