Facebook has developed a model that can tell when a video is a deepfake – and can even identify which algorithm was used to create it. The term "deepfake" refers to a video in which artificial intelligence and deep learning – an algorithmic learning method used to train computers – have been used to make a person appear to say something they have not. Notable examples include a manipulated video of Richard Nixon delivering an address about the Apollo 11 mission and one of Barack Obama insulting Donald Trump. Although such videos are relatively benign now, experts suggest that deepfakes could become one of the most dangerous forms of crime in the future. Detecting a deepfake means determining whether an image is real or not, but the information available to researchers for doing so can be limited – existing methods often rely on input–output pairs or on hardware information that might not be available in the real world. Facebook's new process instead relies on detecting the unique patterns left behind by the artificially intelligent model that generated a deepfake.
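Facebook has not published its full system, but the underlying idea – attributing an image to the generative model that produced it by the residual "fingerprint" it leaves behind – can be sketched in miniature. The toy code below is an illustration of that general technique, not Facebook's actual method; the function names (`residual_fingerprint`, `attribute_model`) and the nearest-neighbour matching are assumptions made for demonstration.

```python
import numpy as np

def residual_fingerprint(image):
    """High-pass residual: the image minus a 3x3 box blur.
    Generative models tend to leave subtle, model-specific
    high-frequency traces that survive this filtering."""
    padded = np.pad(image, 1, mode="edge")
    blurred = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            blurred[i, j] = padded[i:i + 3, j:j + 3].mean()
    return image - blurred

def attribute_model(image, known_fingerprints):
    """Compare the image's residual summary against stored per-model
    fingerprints and return the closest match (nearest neighbour).
    Real systems train a classifier rather than use this crude distance."""
    summary = residual_fingerprint(image).mean(axis=0)
    return min(known_fingerprints,
               key=lambda name: np.linalg.norm(summary - known_fingerprints[name]))
```

In practice the per-model fingerprints would be learned from many outputs of each known generator, and an image whose residual matches none of them closely could be flagged as coming from a previously unseen model.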
Encountering altered videos and photoshopped images is almost a rite of passage on the internet. It's rare these days that you'd visit social media and not come across some form of edited content -- whether that be a simple selfie with a filter, a highly embellished meme or a video edited to add a soundtrack or enhance certain elements. But while some forms of media are obviously edited, other alterations may be harder to spot. You may have heard the term "deepfake" in recent years -- it first came about in 2017 to describe videos and images that use deep learning algorithms to create fabrications that look real. For example, take the "moon disaster" speech, a deepfake in which former president Richard Nixon appears to deliver the contingency address prepared in case the Apollo 11 crew became stranded on the lunar surface -- a speech he never actually gave.
The word deepfake combines the terms "deep learning" and "fake," and describes media fabricated with a form of artificial intelligence. In simple terms, deepfakes are falsified videos made by means of deep learning, said Paul Barrett, adjunct professor of law at New York University. Deep learning is "a subset of AI," referring to arrangements of algorithms that can learn and make intelligent decisions on their own. More specifically, "deepfake" refers to manipulated videos or other digital representations, produced by sophisticated artificial intelligence, that contain fabricated images and sounds appearing to be real. The danger is that "the technology can be used to make people believe something is real when it is not," said Peter Singer, a cybersecurity and defense-focused strategist and senior fellow at the New America think tank.
The earliest roots of deepfakes were a source of social media fun. Anyone capable of taking a selfie could superimpose their face onto a supermodel's body and share it for all of their followers to see. Users could also apply any one of the ubiquitous face filters that add floppy dog ears or bunny whiskers to Instagram photos. These distorted images were the first incarnations of the deepfake era, and until recently they were harmless. Today, however, deepfakes are shaking the very foundation of our trust in what we see, hear and believe, to the point that we're not sure what is real and what is fake.
Deepfakes are spreading fast, and while some have playful intentions, others can cause serious harm. We stepped inside this deceptive new world to see what experts are doing to catch this altered content. Chances are you've seen a deepfake; Donald Trump, Barack Obama, and Mark Zuckerberg have all been targets of the computer-generated replications. A deepfake is a video or an audio clip where deep learning models create versions of people saying and doing things that have never actually happened. A good deepfake can chip away at our ability to discern fact from fiction, testing whether seeing is really believing.