In a world where your online identity links directly to you, the prospect of perfect replication is worrying. But that is exactly what we face with the advent of deepfake technology. A deepfake is media in which a person in a video or image is replaced with someone else's likeness. As the technology becomes cheaper and easier to use, what are the dangers of deepfakes? And how can you tell a deepfake from the real deal?
How to identify and respond to deepfake videos -- realistic AI-synthesized video generated to spread misinformation -- is a challenge highlighted by recent social media stumbles, particularly at Facebook. Several months ago, Facebook was criticized for failing to remove a viral video manipulated to make US House Speaker Nancy Pelosi sound drunk. In collaboration with the Partnership on AI, Microsoft, and academics from top universities, Facebook today announced the Deepfake Detection Challenge (DFDC), which aims to find innovative detection solutions to help the media industry spot videos that have been morphed by AI models. The challenge includes a dataset of video pairs: originals filmed by paid actors and tampered versions generated by various AI techniques. Facebook says no actual Facebook user data will be used, and it has pledged US$10 million to encourage global participation in the challenge.
There are positive uses for deepfake technology, such as creating digital voices for people who have lost theirs or updating film footage instead of reshooting it when actors trip over their lines. However, the potential for malicious use is of grave concern, especially as the technology grows more refined. The quality of deepfakes has improved tremendously since the first products of the technology circulated only a few years ago. Since then, many of the scariest examples of artificial intelligence (AI)-enabled deepfakes have technology leaders, governments, and the media talking about the perils they could create for communities. Most of the general public first encountered deepfakes in 2017.
While it looks like another tale of internet magic, it points to something darker stirring in the internet's depths. This story was originally published August 19, 2019. The video exists thanks to deepfake technology, and while its realism is still in its infancy, it is fast becoming one of the most terrifying developments in technology. To better understand how it works and what it means for the future, we peeked under the covers. An update to Virginia's law against revenge porn went into effect on Tuesday, per CNET; it bans the distribution of videos and images that have been deepfaked (modified using machine learning algorithms to depict someone else) or otherwise created with the intent to "coerce, harass, or intimidate" a victim.
San Francisco (CNN) Deepfake videos are quickly becoming a problem, but there has been much debate about just how big the problem really is. One company is now trying to put a number on it. There are at least 14,678 deepfake videos -- and counting -- on the internet, according to a recent tally by a startup that builds technology to spot this kind of AI-manipulated content. And nearly all of them are porn. That figure is 84% higher than last December, when Amsterdam-based Deeptrace found 7,964 deepfake videos during its first online count.