In November 2017, a Reddit account called deepfakes posted pornographic clips made with software that pasted the faces of Hollywood actresses over those of the real performers. Nearly two years later, deepfake is a generic noun for video manipulated or fabricated with artificial intelligence software. The technique has drawn laughs on YouTube, along with concern from lawmakers fearful of political disinformation. Yet a new report that tracked the deepfakes circulating online finds they mostly remain true to their salacious roots. Startup Deeptrace took a kind of deepfake census during June and July to inform its work on detection tools it hopes to sell to news organizations and online platforms.
San Francisco (CNN) Deepfake videos are quickly becoming a problem, but there has been much debate about just how big the problem really is. One company is now trying to put a number on it. There are at least 14,678 deepfake videos -- and counting -- on the internet, according to a recent tally by a startup that builds technology to spot this kind of AI-manipulated content. And nearly all of them are porn. The number of deepfake videos is 84% higher than it was last December, when Amsterdam-based Deeptrace found 7,964 deepfake videos during its first online count.
You might not be aware of it, but there's a quiet arms race going on over our collective reality. The fight is between those who want to subvert it, ushering in a world where we no longer believe what we see on our screens, and those who want to preserve the status quo. Until now, we have largely trusted our eyes and ears when consuming audio and visual media, but new technological systems that create so-called deepfakes are changing that. And as deepfake videos nudge into the mainstream, experts are increasingly worried about the ramifications for the information sharing that underpins society. Dr Richard Nock, head of machine learning at CSIRO's Data61, understands the daunting potential of the technology that powers deepfake videos.
A team of researchers from the University at Albany has developed a method of combating deepfake videos, using machine learning techniques to search videos for digital "fingerprints" left behind when footage has been altered. One of the biggest concerns in the tech world over the past couple of years has been the rise of deepfakes: fake videos generated by artificial intelligence algorithms running on deep neural networks. The products of the technology are shockingly good, sometimes difficult to tell apart from genuine video. AI researchers, ethicists, and political scientists worry that deepfake technology will eventually be used to influence elections, disseminating misinformation in a form more convincing than a fake news story. To provide some defense against this kind of manipulation and misinformation, the University at Albany researchers have created tools to assist in the detection of fake videos.
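The "fingerprint" idea can be illustrated with a minimal sketch. This is not the Albany team's actual method; it is a simplified stand-in, assuming only that tampering disturbs the statistics of a frame's high-pass residual (the fine-grained noise left after subtracting a local average). A real detector would use a trained neural network, but the same pipeline shape applies: extract a residual per frame, summarize it, and flag frames that deviate from the video's own baseline.

```python
import numpy as np

def highpass_residual(frame):
    """Approximate a frame's noise residual by subtracting a 3x3 box
    blur. Pasted or resynthesized regions tend to disturb this
    residual's statistics relative to untouched frames."""
    padded = np.pad(frame.astype(float), 1, mode="edge")
    h, w = frame.shape
    # Sum of the nine shifted views = 3x3 box blur.
    blur = sum(
        padded[i:i + h, j:j + w] for i in range(3) for j in range(3)
    ) / 9.0
    return frame - blur

def residual_score(frame):
    """Collapse the residual to one scalar: its variance."""
    return float(np.var(highpass_residual(frame)))

def flag_anomalous_frames(frames, z_thresh=3.0):
    """Flag frames whose residual variance is a strong outlier against
    the video's own baseline (a crude stand-in for a learned model)."""
    scores = np.array([residual_score(f) for f in frames])
    mu, sigma = scores.mean(), scores.std() + 1e-9
    return [i for i, s in enumerate(scores) if abs(s - mu) > z_thresh * sigma]

# Illustrative usage on synthetic data: 20 identical smooth frames,
# with a noisy patch pasted into frame 7 to mimic a local edit.
frames = [np.tile(np.linspace(0, 255, 64), (64, 1)) for _ in range(20)]
rng = np.random.default_rng(0)
frames[7] = frames[7].copy()
frames[7][20:40, 20:40] += rng.normal(0, 40, (20, 20))
print(flag_anomalous_frames(frames))  # the tampered frame index: [7]
```

The design point is that the video serves as its own reference: rather than needing ground truth, the detector looks for frames that are statistically inconsistent with the rest of the clip, which is also roughly how temporal-artifact detectors frame the problem.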
How do you defeat "deepfakes"? According to Google, you create more of them. Google has released a large, free database of deepfake videos to help researchers develop detection tools. Google collaborated with Jigsaw, a technology incubator it founded, and with the FaceForensics benchmark team at the Technical University of Munich and the University Federico II of Naples. They worked with paid actors to record hundreds of real videos, then used popular deepfake methods to generate thousands of fake ones.