Yes, these are amazing places. I'm sure you've used one at least once. Yet, while a few types of media are clearly edited, other changes might be harder to spot. You may have heard the term "deepfake videos" recently. It originated in 2017 to describe videos and images that use deep learning algorithms to create fabricated media that looks real.
Hany Farid, a digital forensics expert at UC Berkeley, says the dangers of sophisticated phony videos called "deepfakes" are amplified by their potential to travel rapidly across social media. The videos, uploaded to TikTok in recent weeks by the account @deeptomcruise, have raised new fears over the proliferation of convincing deepfakes -- the nickname for media generated by artificial intelligence technology showing phony events that often seem realistic enough to dupe an audience. Farid told NPR's All Things Considered that the Cruise videos demonstrate a step up in the technology's evolving sophistication. "This is clearly a new category of deepfake that we have not seen before," said Farid, who researches digital forensics and misinformation.
Earlier this year, videos of Tom Cruise started popping up on TikTok of the actor doing some surprisingly un-Tom-Cruise-like stuff: goofing around in an upscale men's clothing store; showing off a coin trick; growling playfully during a short rendition of Dave Matthews Band's "Crash Into Me." In one video, he bites into a lollipop and is amazed to find gum in the center. "Mmmmm," he says to the camera. "How come nobody ever told me there's bubblegum?" The 10 videos, which were posted between February and June, featured an artificial intelligence-generated doppelganger meant to look and sound like him.
Computer scientists have developed a tool that detects deepfake photos with near-perfect accuracy. The system, which analyzes light reflections in a subject's eyes, proved 94 percent effective in experiments. In real portraits, the light reflected in our eyes is generally the same shape and color in both eyes, because both are looking at the same light source. Since deepfakes are composites made from many different photos, most omit this crucial detail. Deepfakes became a particular concern during the 2020 US presidential election, amid fears they'd be used to discredit candidates and spread disinformation.
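To make the idea concrete, here is a minimal sketch of how one might compare corneal highlights between the two eyes. This is an illustration of the general approach described above, not the researchers' actual system: the function name `highlight_similarity`, the brightness threshold, and the assumption that the two eye crops are already extracted and aligned are all my own simplifications.

```python
import numpy as np

def highlight_similarity(left_eye, right_eye, thresh=0.8):
    """Compare specular highlights in two aligned eye crops.

    Each input is an HxWx3 float array with values in [0, 1].
    Bright pixels approximate the corneal reflection; real portraits
    tend to produce similar highlight masks in both eyes, while
    composited deepfakes often do not.

    Returns the intersection-over-union (IoU) of the two highlight
    masks: 1.0 means identical highlights, 0.0 means no overlap.
    """
    mask_l = left_eye.mean(axis=2) > thresh   # highlight mask, left eye
    mask_r = right_eye.mean(axis=2) > thresh  # highlight mask, right eye
    inter = np.logical_and(mask_l, mask_r).sum()
    union = np.logical_or(mask_l, mask_r).sum()
    return inter / union if union else 0.0
```

A real pipeline would first locate the eyes (e.g. with a face-landmark detector) and would likely compare the shape and color of the reflections rather than a raw pixel-overlap score, but the IoU comparison captures the core insight: mismatched highlights are a tell-tale sign of a composite.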
Deepfakes, or AI-generated videos that take a person in an existing video and replace them with someone else's likeness, are multiplying at an accelerating rate. According to startup Deeptrace, the number of deepfakes on the web increased 330% from October 2019 to June 2020, reaching more than 50,000. That's troubling not only because these fakes might be used to sway opinion during an election or implicate a person in a crime, but because they've already been abused to generate pornographic material of actors and to defraud a major energy producer. While much of the discussion to date has focused on social media, pornography, and fraud, deepfakes also pose a threat to the people portrayed in manipulated videos and to their circle of trust. They likewise represent an existential threat to businesses, particularly in industries that depend on digital media to make important decisions.