As is the case with many technological developments, 'deepfakes' -- videos in which someone who did not originally appear in a clip is rendered into it using artificial intelligence (AI) -- largely started in the world of pornography. Viewers, should they so desire, can now watch videos of their favourite musicians and film stars "in action," even though those celebrities were never in the original footage. In these cases, increasingly sophisticated tools are used to put the musicians' and film stars' faces onto pre-existing pornographic videos. There is obviously a sinister, non-celebrity side to this too. The recent Sam Bourne novel, To Kill The Truth, features a protagonist, Maggie Costello, who appears in such a video as part of a cruel plot to undermine her.
One video shows Barack Obama using an obscenity to refer to U.S. President Donald Trump. Another features a different former president, Richard Nixon, performing a comedy routine. But neither video is real: The first was created by filmmaker Jordan Peele, the second by Jigsaw, a technology incubator within Alphabet, Inc. Both are examples of deepfakes -- videos or audio clips that use artificial intelligence to make someone appear to do or say something they didn't. The technology is only a few years old, and it is getting better.
In November 2017, a Reddit account called deepfakes posted pornographic clips made with software that pasted the faces of Hollywood actresses over those of the real performers. Nearly two years later, "deepfake" has become a generic term for any video manipulated or fabricated with artificial intelligence software. The technique has drawn laughs on YouTube, along with concern from lawmakers fearful of political disinformation. Yet a new report that tracked the deepfakes circulating online finds they mostly remain true to their salacious roots. Startup Deeptrace took a kind of deepfake census during June and July to inform its work on detection tools it hopes to sell to news organizations and online platforms.
San Francisco (CNN) -- Deepfake videos are quickly becoming a problem, but there has been much debate about just how big the problem really is. One company is now trying to put a number on it. There are at least 14,678 deepfake videos -- and counting -- on the internet, according to a recent tally by a startup that builds technology to spot this kind of AI-manipulated content. And nearly all of them are porn. The number of deepfake videos is 84% higher than it was last December, when Amsterdam-based Deeptrace found 7,964 deepfake videos during its first online count.
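The 84% growth figure quoted above can be checked with simple arithmetic from the two counts Deeptrace reported; a minimal sketch (variable names are illustrative, not from any Deeptrace source):

```python
# Deepfake video counts reported by Deeptrace, per the article.
count_first_census = 7_964    # December count
count_latest = 14_678         # later tally

# Percentage growth between the two censuses.
growth = (count_latest - count_first_census) / count_first_census
print(f"Growth: {growth:.0%}")  # → Growth: 84%
```

The rounded result matches the 84% increase cited in the piece.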
You might not be aware of it, but there's a quiet arms race going on over our collective reality. The fight is between those who want to subvert it and usher in a world where we no longer believe what we see on our screens, and those who want to preserve the status quo. Until now, we have largely trusted our eyes and ears when consuming audio and visual media, but new technological systems that create so-called deepfakes are changing that. And as these deepfake videos nudge into the mainstream, experts are increasingly worried about the ramifications they will have for the information sharing that underpins society. Dr Richard Nock is the head of machine learning at CSIRO's Data61 and understands the daunting potential of the technology that powers deepfake videos.