Comparing Human and Machine Deepfake Detection with Affective and Holistic Processing

arXiv.org Artificial Intelligence

The recent emergence of deepfake videos raises an important societal question: how can we know whether a video we watch is real or fake? In three online studies with 15,016 participants, we present authentic videos and deepfakes and ask participants to identify which is which. We compare the performance of ordinary participants against the leading computer vision deepfake detection model and find them similarly accurate, though they make different kinds of mistakes. Together, participants with access to the model's prediction are more accurate than either alone, but inaccurate model predictions often decrease participants' accuracy. In embedded randomized experiments, we find that incidental anger decreases participants' performance and that obstructing holistic visual processing of faces hinders participants' performance while mostly leaving the model's unaffected. These results suggest that accounting for emotional influences and harnessing the specialized, holistic visual processing of ordinary people could be promising defenses against machine-manipulated media.
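
To make the collaboration setup concrete, here is a minimal Python sketch on synthetic data. The accuracies, the confidence threshold, and the defer-when-confident rule are illustrative assumptions, not the study's protocol or results:

```python
# Minimal, self-contained sketch of the three conditions described above:
# human alone, model alone, and a human who defers to the model only when the
# model is confident. All data is synthetic and all numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
truth = rng.integers(0, 2, n)                  # 1 = deepfake, 0 = authentic

# Hypothetical human: correct on 80% of videos, independently of the model.
human = np.where(rng.random(n) < 0.80, truth, 1 - truth)

# Hypothetical model: a noisy real-valued score; farther from 0.5 = more confident.
score = rng.normal(loc=truth.astype(float), scale=0.8)
model = (score > 0.5).astype(int)
confident = np.abs(score - 0.5) > 1.0

# "Human with access to the model's prediction": defer only on confident calls.
assisted = np.where(confident, model, human)

for name, pred in [("human", human), ("model", model), ("human+model", assisted)]:
    print(f"{name:12s} accuracy: {(pred == truth).mean():.3f}")
```

With these assumed numbers, the human-with-model condition edges out either alone because deferral happens only where the model tends to be reliable; an always-defer rule would instead import the model's mistakes, mirroring the finding that inaccurate model predictions often decrease participants' accuracy.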


Deepfakes and deep media: A new security battleground

#artificialintelligence

That's troubling not only because these fakes might be used to sway opinions during an election or implicate a person in a crime, but also because they've already been abused to generate pornographic material of actors and to defraud a major energy producer. In anticipation of this new reality, a coalition of academic institutions, tech firms, and nonprofits is developing ways to spot misleading AI-generated media. Their work suggests that detection tools are a viable short-term solution but that the deepfake arms race is just beginning. The best AI-produced prose used to be closer to Mad Libs than The Grapes of Wrath, but cutting-edge language models can now write with humanlike pith and cogency. San Francisco research firm OpenAI's GPT-2 takes seconds to craft passages in the style of a New Yorker article or brainstorm game scenarios.


In the battle against deepfakes, AI is being pitted against AI

#artificialintelligence

Lying has never looked so good, literally. Concern is rising around the world over increasingly sophisticated technology able to create convincingly faked videos and audio, so-called 'deepfakes'. But even as these tools are being developed, technologists are fighting back against the falsehoods. "The concern is that there will be a growing movement globally to undermine the quality of the information sphere and undermine the quality of discourse necessary in a democracy," Eileen Donahoe, a member of the Transatlantic Commission on Election Integrity, told CNBC in December 2018. She said deepfakes are potentially the next generation of disinformation.


AASIST: Audio Anti-Spoofing using Integrated Spectro-Temporal Graph Attention Networks

arXiv.org Artificial Intelligence

Artefacts that differentiate spoofed from bona-fide utterances can reside in spectral or temporal domains. Their reliable detection usually depends upon computationally demanding ensemble systems where each subsystem is tuned to some specific artefacts. We seek to develop an efficient, single system that can detect a broad range of different spoofing attacks without score-level ensembles. We propose a novel heterogeneous stacking graph attention layer which models artefacts spanning heterogeneous temporal and spectral domains with a heterogeneous attention mechanism and a stack node. With a new max graph operation that involves a competitive mechanism and an extended readout scheme, our approach, named AASIST, outperforms the current state-of-the-art by 20% relative. Even a lightweight variant, AASIST-L, with only 85K parameters, outperforms all competing systems.
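
As a rough illustration of the building blocks named above, the sketch below implements a single-head graph attention layer and an element-wise maximum as a stand-in for the competitive "max graph operation". It is a simplified sketch under assumed shapes and dimensions, not the AASIST architecture or its official implementation:

```python
# NOT the AASIST implementation: a toy graph attention layer over node
# features, plus an element-wise max fusing two parallel branches as a crude
# analogue of the competitive "max graph operation" described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGraphAttention(nn.Module):
    """Self-attention over a fully connected graph of nodes, shape (B, N, D)."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.att = nn.Linear(2 * dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape
        h = self.proj(x)
        # Pairwise attention logits from concatenated node pairs (i, j).
        hi = h.unsqueeze(2).expand(b, n, n, d)
        hj = h.unsqueeze(1).expand(b, n, n, d)
        logits = self.att(torch.cat([hi, hj], dim=-1)).squeeze(-1)  # (B, N, N)
        weights = F.softmax(logits, dim=-1)
        return torch.relu(weights @ h)                              # (B, N, D)


def max_graph_op(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Competitive fusion: keep, per element, the stronger of two branches."""
    return torch.maximum(a, b)


# Usage: two parallel branches competing over the same (made-up) node features.
x = torch.randn(4, 23, 64)           # (batch, nodes, feature dim) -- assumed
branch1, branch2 = SimpleGraphAttention(64), SimpleGraphAttention(64)
fused = max_graph_op(branch1(x), branch2(x))
print(fused.shape)                   # torch.Size([4, 23, 64])
```

In AASIST proper, the competing branches model spectral and temporal artefacts respectively; here both branches see the same synthetic features purely to show the fusion mechanics.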


DeepFake MNIST+: A DeepFake Facial Animation Dataset

arXiv.org Artificial Intelligence

DeepFakes, facial manipulation techniques, are an emerging threat to digital society. Various DeepFake detection methods and datasets have been proposed, especially for face swapping. However, recent research pays less attention to facial animation, which is also important on the DeepFake attack side. Facial animation animates a face image with actions provided by a driving video, raising concerns about the security of recent payment systems that rely on liveness detection to authenticate real users by recognising a sequence of facial actions. Our experiments show that existing datasets are not sufficient for developing reliable detection methods, and that current liveness detectors cannot defend against such videos as attacks. In response, we propose a new human face animation dataset, called DeepFake MNIST+, generated by a state-of-the-art image animation generator. It includes 10,000 facial animation videos covering ten different actions, which can spoof recent liveness detectors. A baseline detection method and a comprehensive analysis of the method are also included in this paper. In addition, we analyze the proposed dataset's properties and reveal the difficulty and importance of detecting animated videos under different types of motion and compression quality.
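
For a sense of how such a dataset might be consumed, here is a minimal indexing sketch. The directory layout, action names, and file extension are hypothetical conventions for illustration, not the actual DeepFake MNIST+ release format:

```python
# Hypothetical index over a real-vs-animated face video dataset with ten
# action classes, as described in the abstract. Layout and names are assumed.
from dataclasses import dataclass
from pathlib import Path

ACTIONS = ["blink", "nod", "smile", "open_mouth", "turn_left",
           "turn_right", "raise_brows", "frown", "look_up", "look_down"]


@dataclass
class Sample:
    path: Path
    action: str
    is_fake: bool  # True for generator-animated videos, False for authentic ones


def index_dataset(root: str) -> list[Sample]:
    """Walk <root>/{real,fake}/<action>/*.mp4 and build a labelled index."""
    samples = []
    for label in ("real", "fake"):
        for action in ACTIONS:
            for video in sorted(Path(root, label, action).glob("*.mp4")):
                samples.append(Sample(video, action, label == "fake"))
    return samples


if __name__ == "__main__":
    data = index_dataset("deepfake_mnist_plus")  # hypothetical root directory
    print(f"{len(data)} videos indexed")
```

A detection baseline would then train on the `is_fake` labels, optionally stratified by action and compression quality to probe the difficulty the authors analyze.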