San Francisco (CNN) Deepfake videos are quickly becoming a problem, but there has been much debate about just how big the problem really is. One company is now trying to put a number on it. There are at least 14,678 deepfake videos -- and counting -- on the internet, according to a recent tally by a startup that builds technology to spot this kind of AI-manipulated content. And nearly all of them are porn. The number of deepfake videos is 84% higher than it was last December, when Amsterdam-based Deeptrace found 7,964 deepfake videos during its first online count.
While Americans celebrated a long Labor Day weekend, millions of people in China enrolled in a giant experiment in the future of fake video. An app called Zao that can swap a person's face into movie and TV clips, including from Game of Thrones, went viral on Apple's Chinese app store. The app is popular because making and sharing such clips is fun, but some Western observers' thoughts turned to something more sinister. Zao's viral moment was quickly connected with the idea that US politicians are vulnerable to deepfakes, video or audio fabricated using artificial intelligence to show a person doing or saying something they did not do or say. That threat has been promoted by US lawmakers themselves, including at a recent House Intelligence Committee hearing on deepfakes.
Deepfakes have been known to make politicians appear to do and say unusual things. While some deepfakes are silly and fun, others are misleading and even abusive. Two new California laws aim to put a stop to these more nefarious video forgeries. California Gov. Gavin Newsom on Thursday signed AB 730, which makes it illegal to distribute manipulated videos that aim to discredit a political candidate and deceive voters within 60 days of an election. He also signed AB 602, which gives Californians the right to sue someone who creates deepfakes that place them in pornographic material without consent.
As is the case with many technological developments, "deepfakes" -- videos in which someone who did not originally appear in the clip is rendered into it using artificial intelligence (AI) -- largely started in the world of pornography. Viewers, should they so desire, can now watch videos of their favourite musicians and film stars "in action," even though those celebrities were never in the original footage. In these cases, increasingly sophisticated tools are used to put the musicians' and film stars' faces onto pre-existing pornographic videos. There can obviously be a sinister, non-celebrity side to this too. The recent Sam Bourne novel, To Kill The Truth, features a protagonist, Maggie Costello, who appears in such a video as part of a cruel plot to undermine her.
A perfect storm arising from the world of pornography may threaten the U.S. elections in 2020 with disruptive political scandals having nothing to do with actual affairs. Instead, face-swapping "deepfake" technology that first became popular on porn websites could eventually generate convincing fake videos of politicians saying or doing things that never happened in real life -- a scenario that could sow widespread chaos if such videos are not flagged and debunked in time. The thankless task of debunking fake images and videos online has generally fallen to news reporters, fact-checking websites and some sharp-eyed good Samaritans. But the more recent rise of AI-driven deepfakes that can turn Hollywood celebrities and politicians into digital puppets may require additional fact-checking help from AI-driven detection technologies. An Amsterdam-based startup called Deeptrace Labs aims to become one of the go-to shops for such deepfake detection technology.