In November 2017, a Reddit account called deepfakes posted pornographic clips made with software that pasted the faces of Hollywood actresses over those of the real performers. Nearly two years later, deepfake is a generic noun for video manipulated or fabricated with artificial intelligence software. The technique has drawn laughs on YouTube, along with concern from lawmakers fearful of political disinformation. Yet a new report that tracked the deepfakes circulating online finds they mostly remain true to their salacious roots. Startup Deeptrace took a kind of deepfake census during June and July to inform its work on detection tools it hopes to sell to news organizations and online platforms.
Deepfake technology is an evolving application of artificial intelligence that's adept at making you believe certain media is real, when in fact it's a fabrication of synthesized images and audio designed to fool you. The surge in what's known as "fake news" shows how deepfake videos can trick audiences into believing made-up stories. The term melds two words, deep and fake, combining the concept of deep learning with something that isn't real: deepfakes are artificial images and sounds stitched together by machine-learning algorithms.
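To make the machine-learning idea concrete, a common early face-swap recipe trains one shared encoder with a separate decoder per identity: encode a frame of person A, then decode it with person B's decoder to render B's face wearing A's expression. The sketch below is purely illustrative, with made-up toy dimensions and untrained random weights standing in for the deep convolutional networks a real system would train on thousands of face crops.

```python
# Toy sketch of the shared-encoder / per-identity-decoder idea behind
# early face-swap deepfakes. Illustrative only: real systems train deep
# convolutional autoencoders; these are untrained random linear maps.
import numpy as np

rng = np.random.default_rng(0)

FACE_DIM = 64 * 64   # flattened grayscale face crop (toy assumption: 64x64)
LATENT_DIM = 32      # shared, roughly "identity-free" representation

# One encoder shared by both identities...
encoder = rng.standard_normal((LATENT_DIM, FACE_DIM)) / np.sqrt(FACE_DIM)
# ...and a separate decoder per identity (A and B).
decoder_a = rng.standard_normal((FACE_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)
decoder_b = rng.standard_normal((FACE_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)

def encode(face):
    return encoder @ face

def swap_to_b(face_a):
    """Encode a face of person A, then decode with B's decoder.

    In a trained system this renders B's face with A's pose and
    expression -- the core face-swap trick."""
    return decoder_b @ encode(face_a)

face_a = rng.standard_normal(FACE_DIM)  # stand-in for a real face crop
swapped = swap_to_b(face_a)
print(swapped.shape)  # same shape as the input face
```

Training both decoders against the one shared encoder is what forces the latent code to capture pose and expression rather than identity, which is why swapping decoders swaps faces.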
Encountering altered videos and photoshopped images is almost a rite of passage on the internet. It's rare these days that you'd visit social media and not come across some form of edited content -- whether that be a simple selfie with a filter, a highly embellished meme or a video edited to add a soundtrack or enhance certain elements. But while some forms of media are obviously edited, other alterations may be harder to spot. You may have heard the term "deepfake" in recent years -- it first came about in 2017 to describe videos and images that use deep learning algorithms to create fabrications that look real. For example, take the "In Event of Moon Disaster" video, a deepfake depicting former president Richard Nixon delivering the contingency speech prepared in case the Apollo 11 astronauts were stranded on the moon -- a speech that, in reality, he never had to give.