One video shows Barack Obama using an obscenity to refer to U.S. President Donald Trump. Another features a different former president, Richard Nixon, performing a comedy routine. But neither video is real: the first was created by filmmaker Jordan Peele, the second by Jigsaw, a technology incubator within Alphabet, Inc. Both are examples of deepfakes, videos or audio clips that use artificial intelligence to make someone appear to do or say something they didn't. The technology is only a few years old and getting better.
Already, stars like Gal Gadot and Cara Delevingne are victims of nonconsensual face swapping. The Screen Actors Guild‐American Federation of Television and Radio Artists (SAG-AFTRA) recently made a bold statement against deepfakes, a technique that uses artificial intelligence to digitally impose an actor's likeness into a film without permission. As reported by Deadline Hollywood, the president of SAG-AFTRA, Gabrielle Carteris, wrote in the union's monthly magazine that it has "undertaken an exhaustive review of our collective bargaining options and legislative options to combat any and all uses of digital re-creations, not limited to deepfakes, that defame our members and inhibit their ability to protect their images, voices and performances from misappropriation." Carteris is specific about these unauthorized uses, citing examples "in advertisements, products, merchandise, company branding, fake news, movies, video games, or pornography." The use of deepfake technology to create nonconsensual erotic content is considered particularly egregious.
A perfect storm arising from the world of pornography may threaten the U.S. elections in 2020 with disruptive political scandals having nothing to do with actual affairs. Instead, face-swapping "deepfake" technology that first became popular on porn websites could eventually generate convincing fake videos of politicians saying or doing things that never happened in real life, a scenario that could sow widespread chaos if such videos are not flagged and debunked in time. The thankless task of debunking fake images and videos online has generally fallen to news reporters, fact-checking websites and some sharp-eyed good Samaritans. But the more recent rise of AI-driven deepfakes that can turn Hollywood celebrities and politicians into digital puppets may require additional fact-checking help from AI-driven detection technologies. An Amsterdam-based startup called Deeptrace Labs aims to become one of the go-to shops for such deepfake detection technology.
San Francisco (CNN) Deepfake videos are quickly becoming a problem, but there has been much debate about just how big the problem really is. One company is now trying to put a number on it. There are at least 14,678 deepfake videos (and counting) on the internet, according to a recent tally by a startup that builds technology to spot this kind of AI-manipulated content. And nearly all of them are porn. The number of deepfake videos is 84% higher than it was last December, when Amsterdam-based Deeptrace found 7,964 deepfake videos during its first online count.
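The 84% growth figure above follows directly from the two counts reported in the article; a minimal sketch of the arithmetic (variable names are illustrative):

```python
# Counts reported by Deeptrace in the article
december_count = 7_964   # first online tally, last December
recent_count = 14_678    # most recent tally

# Percentage growth between the two counts
growth = (recent_count - december_count) / december_count
print(f"{growth:.0%}")  # → 84%
```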