Deepfakes are video manipulations that can make people appear to say things they never said. Barack Obama and Nicolas Cage have both been featured in these videos. Realistically falsifying video once demanded enormous time and expertise; for decades, authentic-looking video renderings were seen only in big-budget films like "Star Wars." Thanks to the rise of artificial intelligence, however, doctoring footage has become more accessible than ever, which researchers say poses a threat to national security.
Plenty of people are following the final days of the midterm election campaigns, but Yale law researcher Rebecca Crootof has a special interest--a small wager. If she wins, victory will be bittersweet, like the Manhattan cocktail that will be her prize. In June, Crootof bet that before 2018 is out, an electoral campaign somewhere in the world will be roiled by a deepfake--a video generated by machine-learning software that shows someone doing or saying something that in fact they did not do or say. Under the terms of the bet, the video must receive more than 2 million views before being debunked.
A perfect storm arising from the world of pornography may threaten the U.S. elections in 2020 with disruptive political scandals having nothing to do with actual affairs. Instead, face-swapping "deepfake" technology that first became popular on porn websites could eventually generate convincing fake videos of politicians saying or doing things that never happened in real life--a scenario that could sow widespread chaos if such videos are not flagged and debunked in time. The thankless task of debunking fake images and videos online has generally fallen to news reporters, fact-checking websites, and some sharp-eyed good Samaritans. But the more recent rise of AI-driven deepfakes that can turn Hollywood celebrities and politicians into digital puppets may require additional fact-checking help from AI-driven detection technologies. An Amsterdam-based startup called Deeptrace Labs aims to become one of the go-to shops for such deepfake detection technology.
Last week, Mona Lisa smiled. A big, wide smile, followed by what appeared to be a laugh and the silent mouthing of words that could only be an answer to the mystery that had beguiled her viewers for centuries. A great many people were unnerved. The "living portrait," along with likenesses of Marilyn Monroe, Salvador Dali, and others, demonstrated the latest in deepfakes--seemingly realistic video or audio generated using machine learning. Developed by researchers at Samsung's AI lab in Moscow, the portraits showcase a new method for creating credible video from a single image.
The ability to digitally insert actors into films has existed since the mid-1990s, when it was first used to finish The Crow after the tragic on-set death of lead actor Brandon Lee. Techniques to do this in a realistic, natural-looking way have been available for years, so why is there such a panic brewing over deepfakes? Before deepfakes, this sort of manipulation required expensive CGI software and highly specialized knowledge that was limited to a relative handful of digital-effects studios. Deepfakes employ artificial intelligence and allow anyone with a decent computer to make realistic fake videos starring just about anyone in the world, working only from a set of images or videos of the target. Deepfakes are a relatively new phenomenon, first emerging on the internet in late 2017.
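The learn-from-a-set-of-images approach described above is commonly attributed to an autoencoder trick: one shared encoder is trained on faces of two people, with a separate decoder per identity, so that encoding person A and decoding with person B's decoder "swaps" the face. The following is a minimal, illustrative NumPy sketch of that architecture only; the tiny linear networks, random synthetic "faces," dimensions, and learning rate are all assumptions for demonstration, while real deepfake tools use deep convolutional networks trained on aligned face crops.

```python
# Toy sketch of the shared-encoder / per-identity-decoder idea behind
# classic face-swap deepfakes. Everything here is synthetic and linear.
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT = 16, 4                     # toy "image" and bottleneck sizes

# Synthetic stand-ins for aligned face crops of two people, A and B.
faces_a = rng.normal(size=(200, DIM))
faces_b = rng.normal(size=(200, DIM)) + 1.0   # shifted so identities differ

# One shared encoder, one decoder per identity (the key structural trick).
enc = rng.normal(scale=0.1, size=(DIM, LATENT))
dec_a = rng.normal(scale=0.1, size=(LATENT, DIM))
dec_b = rng.normal(scale=0.1, size=(LATENT, DIM))

def mse(x, y):
    return float(np.mean((x - y) ** 2))

init_err_a = mse(faces_a @ enc @ dec_a, faces_a)
init_err_b = mse(faces_b @ enc @ dec_b, faces_b)

lr = 0.01
for step in range(500):
    # Each identity trains the shared encoder plus its own decoder.
    for faces, dec in ((faces_a, dec_a), (faces_b, dec_b)):
        z = faces @ enc                 # encode into the shared latent space
        err = z @ dec - faces           # reconstruction error
        grad_dec = z.T @ err / len(faces)
        grad_enc = faces.T @ (err @ dec.T) / len(faces)
        dec -= lr * grad_dec            # in-place updates
        enc -= lr * grad_enc

final_a = mse(faces_a @ enc @ dec_a, faces_a)
final_b = mse(faces_b @ enc @ dec_b, faces_b)

# The "swap": encode A's expression, render it with B's decoder.
swap = (faces_a @ enc) @ dec_b
print("reconstruction error A:", round(final_a, 4))
print("reconstruction error B:", round(final_b, 4))
```

Because both identities pass through the same encoder, the latent code captures pose and expression rather than identity; the identity-specific decoder supplies the face, which is why the final line produces B wearing A's expression in real systems.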