Scientists prove that deepfake detectors can be duped

Engadget

Universities, organizations and tech giants such as Microsoft and Facebook have been working on tools that can detect deepfakes, in an effort to prevent their use for spreading malicious media and misinformation. Deepfake detectors, however, can still be duped, a group of computer scientists from UC San Diego has warned. At the WACV 2021 computer vision conference, held online in January, the team showed how detection tools can be fooled by inserting inputs called "adversarial examples" into every video frame. In their announcement, the scientists explained that adversarial examples are manipulated images that can cause AI systems to make a mistake. See, most detectors work by tracking faces in videos and sending cropped face data to a neural network -- deepfake videos are convincing because they were modified to copy a real person's face, after all.
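
The pipeline the excerpt describes (track faces, crop them, classify each crop) can be sketched roughly as below. This is an illustrative stand-in, not the researchers' code: the detector architecture, the class ordering, and the way the perturbation is applied per frame are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    """Placeholder CNN classifier: cropped face in, real/fake logits out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, 2)  # logits: [real, fake] (assumed order)

    def forward(self, x):
        return self.classifier(self.features(x))

def classify_frames(detector, face_crops, perturbation):
    """Add the same adversarial perturbation to every cropped face, then classify.

    face_crops: (N, 3, H, W) tensor of per-frame face crops in [0, 1].
    """
    adversarial = (face_crops + perturbation).clamp(0.0, 1.0)
    return detector(adversarial).argmax(dim=1)  # 0 = real, 1 = fake
```

Inserting an adversarial example "into every video frame", as the excerpt puts it, then amounts to broadcasting one perturbation tensor across the batch of cropped faces before the network ever sees them.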


Explained: Why it is becoming more difficult to detect deepfake videos, and what the implications are

#artificialintelligence

Doctored videos, or deepfakes, have been one of the key weapons used in propaganda battles for quite some time now. Donald Trump taunting Belgium for remaining in the Paris climate agreement, David Beckham speaking fluently in nine languages, Mao Zedong singing 'I Will Survive', or Jeff Bezos and Elon Musk in a pilot episode of Star Trek… all these videos have gone viral despite being fake -- or rather, because they were deepfakes. Last year, Marco Rubio, the Republican senator from Florida, said deepfakes are as potent as nuclear weapons in waging wars in a democracy. "In the old days, if you wanted to threaten the United States, you needed 10 aircraft carriers, and nuclear weapons, and long-range missiles. Today, you just need access to our Internet system, to our banking system, to our electrical grid and infrastructure, and increasingly, all you need is the ability to produce a very realistic fake video that could undermine our elections, that could throw our country into tremendous crisis internally and weaken us deeply," Forbes quoted him as saying.


A very tiny alteration can help deepfakes escape detection

#artificialintelligence

Last month, Sophie Wilmès, the prime minister of Belgium, appeared in an online video to tell her audience that the COVID-19 pandemic was linked to the "exploitation and destruction by humans of our natural environment." Whether or not these two existential crises are connected, the fact is that Wilmès said no such thing. Produced by an organization of climate change activists, the video was actually a deepfake, or a form of fake media created using deep learning. Deepfakes are yet another way to spread misinformation--as if there wasn't enough fake news about the pandemic already. Because new security measures consistently catch many deepfake images and videos, people may be lulled into a false sense of security and believe we have the situation under control.


Adversarial Perturbations Fool Deepfake Detectors

arXiv.org Machine Learning

This work uses adversarial perturbations to enhance deepfake images and fool common deepfake detectors. We created adversarial perturbations using the Fast Gradient Sign Method and the Carlini and Wagner L2-norm attack in both black-box and white-box settings. Detectors achieved over 95% accuracy on unperturbed deepfakes, but less than 27% accuracy on perturbed deepfakes. We also explore two improvements to deepfake detectors: (i) Lipschitz regularization, and (ii) Deep Image Prior (DIP). Lipschitz regularization constrains the gradient of the detector with respect to the input in order to increase robustness to input perturbations. The DIP defense removes perturbations using generative convolutional neural networks in an unsupervised manner. Regularization improved the detection of perturbed deepfakes on average, including a 10% accuracy boost in the black-box case. The DIP defense achieved 95% accuracy on perturbed deepfakes that fooled the original detector, while retaining 98% accuracy in other cases on a 100-image subsample.
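
As a rough illustration of the attack and one of the defenses named in the abstract, here is a minimal PyTorch sketch of a single-step FGSM evasion plus a gradient-penalty form of Lipschitz regularization. Everything specific here is an assumption made for illustration: the epsilon, the penalty weight, the class ordering (fake = 1), and the [0, 1] image range; the paper's actual implementation may differ.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(detector, fake_image, epsilon=0.01):
    """One-step FGSM: nudge a fake image so the detector calls it real.

    Assumes fake_image is a (1, 3, H, W) tensor in [0, 1] and the detector
    outputs two logits with class 1 = "fake" (illustrative conventions).
    """
    x = fake_image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(detector(x), torch.tensor([1]))  # true label: fake
    loss.backward()
    # Step in the direction that increases the "fake" loss, pushing the
    # prediction toward "real", then clamp back to the valid image range.
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

def lipschitz_regularized_loss(detector, x, labels, weight=0.1):
    """Training loss plus a penalty on the input gradient (weight assumed).

    Penalizing the gradient of the loss with respect to the input is one
    common way to realize the Lipschitz constraint the abstract describes.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(detector(x), labels)
    (grad,) = torch.autograd.grad(loss, x, create_graph=True)
    return loss + weight * grad.pow(2).sum(dim=(1, 2, 3)).mean()
```

The DIP defense mentioned in the abstract takes a different route: it reconstructs the input with an untrained convolutional network, exploiting the tendency of such networks to fit natural image structure before they fit high-frequency perturbations.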


Can AI Detect Deepfakes To Help Ensure Integrity of U.S. 2020 Elections?

IEEE Spectrum Robotics

A perfect storm arising from the world of pornography may threaten the U.S. elections in 2020 with disruptive political scandals having nothing to do with actual affairs. Instead, face-swapping "deepfake" technology that first became popular on porn websites could eventually generate convincing fake videos of politicians saying or doing things that never happened in real life--a scenario that could sow widespread chaos if such videos are not flagged and debunked in time. The thankless task of debunking fake images and videos online has generally fallen upon news reporters, fact-checking websites and some sharp-eyed good Samaritans. But the more recent rise of AI-driven deepfakes that can turn Hollywood celebrities and politicians into digital puppets may require additional fact-checking help from AI-driven detection technologies. An Amsterdam-based startup called Deeptrace Labs aims to become one of the go-to shops for such deepfake detection technologies.