Deepfake Detectors Can Be Defeated, Computer Scientists Show for the First Time


Systems designed to detect deepfakes (videos that manipulate real-life footage via artificial intelligence) can be deceived, computer scientists showed for the first time at the WACV 2021 conference, held online Jan. 5 to 9, 2021.

The researchers showed that detectors can be defeated by inserting inputs called adversarial examples into every video frame. Adversarial examples are slightly manipulated inputs that cause artificial intelligence systems, such as machine learning models, to make mistakes. The team also showed that the attack still works after the videos are compressed.

"Our work shows that attacks on deepfake detectors could be a real-world threat," said Shehzeen Hussain, a UC San Diego computer engineering Ph.D. student and first co-author on the WACV paper.
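To give a sense of how an adversarial example works, here is a minimal sketch, not the paper's actual attack, which targets real deepfake detectors and survives video compression. It uses a hypothetical toy linear "detector" and an FGSM-style perturbation (stepping each pixel against the sign of the gradient) to flip the detector's decision while changing every pixel by at most a small epsilon:

```python
import math
import random

random.seed(0)

# Toy stand-ins (assumptions, not from the paper): 64 detector weights
# and one flattened 64-pixel "video frame".
w = [random.gauss(0, 1) for _ in range(64)]      # hypothetical detector weights
frame = [random.gauss(0, 1) for _ in range(64)]  # hypothetical frame pixels

def detector_score(x):
    """Probability the toy detector assigns to the frame being 'fake'."""
    logit = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-logit))

# For a linear model, the gradient of the logit with respect to the input
# is just w, so shifting each pixel by -epsilon * sign(w_i) lowers the
# 'fake' score while perturbing every pixel by no more than epsilon.
epsilon = 0.5
adv_frame = [xi - epsilon * math.copysign(1.0, wi)
             for xi, wi in zip(frame, w)]

print(round(detector_score(frame), 3), round(detector_score(adv_frame), 3))
```

The same idea scales up: against a deep network the gradient is computed by backpropagation rather than read off the weights, and in the video setting a perturbation like this would be applied to every frame.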
