Can AI Detect Deepfakes To Help Ensure Integrity of U.S. 2020 Elections?

IEEE Spectrum Robotics

A perfect storm arising from the world of pornography may threaten the 2020 U.S. elections with disruptive political scandals that have nothing to do with actual affairs. Instead, the face-swapping "deepfake" technology that first became popular on porn websites could eventually generate convincing fake videos of politicians saying or doing things that never happened, a scenario that could sow widespread chaos if such videos are not flagged and debunked in time. The thankless task of debunking fake images and videos online has generally fallen to news reporters, fact-checking websites, and sharp-eyed good Samaritans. But the more recent rise of AI-driven deepfakes that can turn Hollywood celebrities and politicians into digital puppets may require additional fact-checking help from AI-driven detection technologies. An Amsterdam-based startup called Deeptrace Labs aims to become one of the go-to shops for such deepfake detection technologies.
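As a rough illustration of what such AI-driven detection typically involves, the sketch below scores a video by sampling frames and averaging the output of a per-frame classifier. This is a generic pattern assumed for illustration, not Deeptrace Labs' actual method; the classify_frame function is a hypothetical placeholder for a trained detector.

```python
import cv2          # OpenCV, used here for frame extraction
import numpy as np


def classify_frame(frame: np.ndarray) -> float:
    """Hypothetical stand-in for a trained detector that would return
    the probability a frame is synthetic. A real system would run a
    face detector plus a model trained on deepfake artifacts here;
    this placeholder returns a neutral 0.5 so the sketch runs end to end."""
    return 0.5


def score_video(path: str, sample_every: int = 30) -> float:
    """Average per-frame fake probabilities over sampled frames.
    Sampling every Nth frame keeps the cost manageable for long clips."""
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            scores.append(classify_frame(frame))
        idx += 1
    cap.release()
    return float(np.mean(scores)) if scores else 0.0
```

In practice, detectors also aggregate temporal cues across frames (blinking patterns, lighting inconsistencies) rather than scoring frames independently, which is one reason detection remains an arms race.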


As Deepfake Videos Spread, Blockchain Can Be Used to Stop Them

Criptalk.com

#artificialintelligence

At a time when the term "fake news" has become a household phrase thanks to its repeated use by President Donald Trump, deepfakes, i.e., seemingly realistic videos that are in fact manipulated, can further escalate distrust of the media. Technologists are looking to the inherent nature of blockchain as an aggregator of trust to put public confidence back into the system. Truth is increasingly becoming a relative term, and when everyone has their own version of the truth, democracy becomes meaningless. The advent of deepfakes is pushing society toward a point where facts can be manufactured to fit one's opinions and objectives, because in just a few years the naked eye or ear will no longer suffice to tell whether a video or audio clip is genuine.
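The premise can be illustrated with a toy provenance check: if a publisher registers a cryptographic hash of a video at release time, anyone can later recompute the hash and compare. The sketch below is a minimal illustration under that assumption, not any system described in the article; the in-memory registry dict is a hypothetical stand-in for an actual blockchain ledger.

```python
import hashlib
from typing import Optional

# Hypothetical in-memory "ledger"; a real provenance system would
# anchor these hashes in blockchain transactions instead.
registry: dict[str, str] = {}


def fingerprint(path: str) -> str:
    """Compute a SHA-256 hash of the raw video file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def register(path: str, publisher: str) -> None:
    """Publisher records the video's hash at release time."""
    registry[fingerprint(path)] = publisher


def verify(path: str) -> Optional[str]:
    """Return the registered publisher if the clip matches an original.
    Any re-encoding or tampering changes the hash, so a match only
    certifies the exact original file."""
    return registry.get(fingerprint(path))
```

Note the limitation this exposes: even benign transcoding breaks an exact hash match, which is why real provenance proposals tend to pair ledgers with signed metadata or perceptual fingerprints rather than raw file hashes.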


How The Wall Street Journal is preparing its journalists to detect deepfakes

#artificialintelligence

Artificial intelligence is fueling the next phase of misinformation. The new type of synthetic media known as deepfakes poses major verification challenges for newsrooms, because the content is genuinely difficult to distinguish from authentic footage. We at The Wall Street Journal are taking this threat seriously and have launched an internal deepfakes task force led by the Ethics & Standards and Research & Development teams. This group, the WSJ Media Forensics Committee, comprises video, photo, visuals, research, platform, and news editors who have been trained in deepfake detection.


'A definite threat': The fake video phenomenon taking over the internet

#artificialintelligence

You might not be aware of it, but a quiet arms race is underway over our collective reality. The fight is between those who want to subvert it, ushering in a world where we no longer believe what we see on our screens, and those who want to preserve the status quo. Until now, we have largely trusted our eyes and ears when consuming audio and visual media, but new technological systems that create so-called deepfakes are changing that. As these deepfake videos nudge into the mainstream, experts are increasingly worried about the ramifications they will have for the information sharing that underpins society. Dr Richard Nock, head of machine learning at CSIRO's Data61, understands the daunting potential of the technology that powers deepfake videos.


A Deepfake Deep Dive into the Murky World of Digital Imitation

#artificialintelligence

About a year ago, top deepfake artist Hao Li came to a disturbing realization: deepfakes, i.e., AI-based human-image synthesis used to create fake content, are rapidly evolving. In fact, Li believes that in as little as six months, deepfake videos could be completely undetectable. That is spurring security and privacy concerns as the AI behind the technology becomes commercialized and falls into the hands of malicious actors. Li, for his part, has seen the positives of the technology as a pioneering computer graphics and vision researcher, particularly for entertainment. He has worked his magic on various high-profile deepfake applications, from leading the charge in putting Paul Walker into Furious 7 after the actor died before the film finished production, to creating the facial-animation technology that Apple now uses in its Animoji feature on the iPhone X.