Lawmakers Warn About Threat of Political Deepfakes by Creating One

#artificialintelligence

Michael Waltz (R-FL) and Don Beyer (D-VA) produced an artificial intelligence-doctored political video, or deepfake, for the U.S. House Science subcommittee to demonstrate the threat such disinformation presents. Lawmakers are worried about malefactors using deepfakes to disrupt and divide U.S. voters in the run-up to the 2020 election, and Waltz and Beyer are urging investment in deepfake-detection solutions, especially as production tools become increasingly affordable and accessible. Siwei Lyu of the State University of New York at Albany, who helped craft the deepfake demo, said his software could generate a deepfake from a minute-long YouTube video in eight hours. Meanwhile, Hany Farid of the University of California, Berkeley cited the sluggish progress of technology platforms like Facebook and Google in addressing deepfakes.


'Deepfake' videos are pushing the boundaries of digital media

#artificialintelligence

As fake videos generated by AI continue to become more convincing, what was once a tool for sharing laughs on the internet has grown into a worrying corner of digital media. Whether it's a viral video of "Tom Cruise" doing a magic trick or "Facebook's Mark Zuckerberg" boasting about having "total control of billions of people's stolen data," deepfake videos have the capacity to cause real harm to people who fall for their deception. A Pennsylvania woman was charged last weekend with allegedly making deepfake videos of girls on a cheerleading team her daughter used to belong to – the videos showed the girls nude, smoking or partying – in an attempt to get them kicked off the team. Graphic artist Chris Ume, the mastermind behind the Tom Cruise TikTok deepfake, told CTV News that when he started making deepfake videos it was just to "have good fun." But now, as manipulated media continues to make headlines, his views have changed.


How puny humans can spot devious deepfakes

#artificialintelligence

In June, a video allegedly showing Datuk Seri Azmin Ali, the Malaysian minister of economic affairs, engaged in a sexual tryst with Muhammad Haziq Abdul Aziz, a deputy Malaysian minister's secretary, surfaced online. The video spread like wildfire and threw the country's media into a frenzy. It had real-world consequences: Abdul Aziz, who in the eyes of the government had committed a crime, was quickly arrested. But, according to Malaysia's prime minister, the video was just one of countless scarily accurate deepfake videos that have found their way onto the internet in the last year. Deepfakes work by using something called a generative adversarial network (GAN), which pits two neural networks against each other: a generator, which fabricates images, and a discriminator, which tries to tell the fakes from real ones.
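For readers who want to see the generator-versus-discriminator idea in code, here is a minimal sketch in PyTorch. It is an illustration only, not the software behind any of the videos discussed here: the network sizes (latent_dim, data_dim) and the toy fully connected architecture are assumptions, and real face-swap deepfakes use much larger image-based models.

```python
# Toy sketch of GAN training: a generator and a discriminator pitted against
# each other. Sizes and architecture are illustrative assumptions only.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # assumed toy dimensions

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # Discriminator step: learn to separate real samples from generated ones.
    noise = torch.randn(batch_size, latent_dim)
    fake_batch = generator(noise).detach()  # no generator gradients here
    d_loss = (loss_fn(discriminator(real_batch), real_labels)
              + loss_fn(discriminator(fake_batch), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to fool the freshly updated discriminator.
    noise = torch.randn(batch_size, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The key design point is the alternation: the discriminator is updated to spot fakes, then the generator is updated to fool it, and the two improve in tandem.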


Deepfakes Aren't Very Good. Nor Are the Tools to Detect Them

WIRED

The best deepfake detector to emerge from a major Facebook-led effort to combat the altered videos would only catch about two-thirds of them. In September 2019, as speculation about the danger of deepfakes grew, Facebook challenged artificial intelligence wizards to develop techniques for detecting deepfake videos. In January 2020, the company also banned deepfakes used to spread misinformation. Facebook's Deepfake Detection Challenge, in collaboration with Microsoft, Amazon Web Services, and the Partnership on AI, was run through Kaggle, a platform for coding contests that is owned by Google. It provided a vast collection of face-swap videos: 100,000 deepfake clips, created by Facebook using paid actors, on which entrants tested their detection algorithms.
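As a rough illustration of what "catching about two-thirds" of deepfakes means, the hedged sketch below scores a set of per-clip fake probabilities against ground-truth labels, reporting a detection rate and a mean log loss. The clip names, the probabilities, and the score function itself are made up for the example and are not the challenge's official scoring code.

```python
# Illustrative scoring of a deepfake detector's per-clip predictions.
# File names, probabilities, and this scoring function are assumptions.
import math

def score(predictions, labels):
    """predictions: clip name -> probability the clip is a deepfake (0..1).
    labels: clip name -> 1 if the clip is a deepfake, 0 if it is real.
    Returns (detection rate at a 0.5 threshold, mean log loss)."""
    eps = 1e-15
    correct = 0
    log_loss = 0.0
    for clip, y in labels.items():
        p = min(max(predictions.get(clip, 0.5), eps), 1 - eps)
        correct += int((p >= 0.5) == bool(y))
        log_loss += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    n = len(labels)
    return correct / n, log_loss / n

# A detector that confidently flags two of three deepfake clips:
preds = {"a.mp4": 0.9, "b.mp4": 0.3, "c.mp4": 0.8}
truth = {"a.mp4": 1, "b.mp4": 1, "c.mp4": 1}
rate, loss = score(preds, truth)
print(rate, loss)  # detection rate ~0.67, i.e. about two-thirds caught
```

Running the example prints a detection rate of roughly 0.67: two of the three hypothetical deepfake clips are flagged, mirroring the two-thirds figure reported for the winning challenge entry.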