deepfake attack
Detecting Deepfakes Without Seeing Any
Tal Reiss, Bar Cavia, Yedid Hoshen
Deepfake attacks, the malicious manipulation of media containing people, are a serious concern for society. Conventional deepfake detection methods train supervised classifiers to distinguish real media from previously encountered deepfakes. Such techniques can only detect deepfakes similar to those previously seen, not zero-day (previously unseen) attack types. As current deepfake generation techniques are changing at a breathtaking pace and new attack types are proposed frequently, this is a major issue. Our main observations are that: i) in many effective deepfake attacks, the fake media must be accompanied by false facts, i.e., claims about the identity, speech, motion, or appearance of the person. For instance, when impersonating Obama, the attacker explicitly or implicitly claims that the fake media show Obama; ii) current generative techniques cannot perfectly synthesize the false facts claimed by the attacker. We therefore introduce the concept of "fact checking", adapted from fake news detection, for detecting zero-day deepfake attacks. Fact checking verifies that the claimed facts (e.g., the claimed identity) agree with the observed media, and can thus differentiate between real and fake media. Consequently, we introduce FACTOR, a practical recipe for deepfake fact checking, and demonstrate its power in critical attack settings: face swapping and audio-visual synthesis. Although it is training-free, relies exclusively on off-the-shelf features, is very easy to implement, and does not see any deepfakes, it achieves better than state-of-the-art accuracy. Our code is available at https://github.com/talreiss/FACTOR.

The ability to disseminate large-scale disinformation to undermine scientifically established facts poses an existential risk to humanity and endangers democratic institutions and fundamental human rights. Deepfakes have been universally acknowledged to pose a grave threat to society. Bad actors can use fake information for various malicious purposes, including disinformation, societal polarization, embarrassment, and privacy violations.
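The abstract describes fact checking as verifying that a claimed fact (e.g., the claimed identity) agrees with the observed media, using only off-the-shelf features and no deepfake training data. A minimal sketch of that idea for the identity case follows; it assumes some external face encoder supplies embeddings, and the function name `factor_score`, the synthetic embeddings, and the 512-dimensional size are illustrative assumptions, not the paper's actual API:

```python
import numpy as np

def factor_score(claimed_ref_embeddings, observed_embedding):
    """Score how well the observed media supports the claimed identity.

    claimed_ref_embeddings: embeddings of reference media known to show
        the claimed identity (e.g., verified photos of Obama).
    observed_embedding: embedding of the face in the suspect media.

    Returns the maximum cosine similarity between the observed embedding
    and the references. A low score means the claimed fact is not
    supported by the media, flagging a likely deepfake.
    """
    refs = np.asarray(claimed_ref_embeddings, dtype=float)
    obs = np.asarray(observed_embedding, dtype=float)
    refs = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    obs = obs / np.linalg.norm(obs)
    return float(np.max(refs @ obs))

# Toy demo with synthetic vectors standing in for an off-the-shelf
# face encoder's embeddings.
rng = np.random.default_rng(0)
identity = rng.normal(size=512)                    # the claimed person
refs = [identity + 0.1 * rng.normal(size=512) for _ in range(5)]
real_obs = identity + 0.1 * rng.normal(size=512)   # media consistent with claim
fake_obs = rng.normal(size=512)                    # media inconsistent with claim

assert factor_score(refs, real_obs) > factor_score(refs, fake_obs)
```

A real deployment would replace the synthetic vectors with embeddings from a pretrained face-recognition network, and threshold the score to decide; no classifier is ever trained on deepfakes, which is what makes the approach zero-day by construction.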
- North America > United States (0.14)
- Europe > Switzerland > Zürich > Zürich (0.14)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Asia > Middle East > Israel > Jerusalem District > Jerusalem (0.04)
Deepfake attacks can easily trick facial recognition
In brief Miscreants can easily steal someone else's identity by tricking live facial recognition software using deepfakes, according to a new report. Sensity AI, a startup focused on tackling identity fraud, carried out a series of pretend attacks. Engineers scanned the image of someone from an ID card and mapped their likeness onto another person's face. Sensity then tested whether they could breach live facial recognition systems by tricking them into believing the pretend attacker is a real user. So-called "liveness tests" try to authenticate identities in real time, relying on images or video streams from cameras, such as the face recognition used to unlock mobile phones.
Deepfakes Can Effectively Fool Many Major Facial 'Liveness' APIs
A new research collaboration between the US and China has probed the susceptibility to deepfakes of some of the biggest face-based authentication systems in the world, and found that most of them are vulnerable to developing and emerging forms of deepfake attack. The researchers conducted deepfake-based intrusions using a custom framework deployed against Facial Liveness Verification (FLV) systems that are commonly supplied by major vendors, and sold as a service to downstream clients such as airlines and insurance companies. Facial Liveness is intended to repel the use of techniques such as adversarial image attacks, the use of masks and pre-recorded video, so-called 'master faces', and other forms of visual ID cloning. The study concludes that the limited number of deepfake-detection modules deployed in these systems, many of which serve millions of customers, are far from infallible, and may have been configured on deepfake techniques that are now outmoded, or may be too architecture-specific. '[Different] deepfake methods also show variations across different vendors… Without access to the technical details of the target FLV vendors, we speculate that such variations are attributed to the defense measures deployed by different vendors.'
- Asia > China (0.37)
- Asia > Middle East > UAE (0.15)
- North America > United States > Pennsylvania (0.05)
Deepfake Attacks Are About to Surge, Experts Warn
Artificial intelligence and the rise of deepfake technology are something cybersecurity researchers have cautioned about for years, and now they have officially arrived. Cybercriminals are increasingly sharing, developing and deploying deepfake technologies to bypass biometric security protections, and in crimes including blackmail, identity theft, social engineering-based attacks and more, experts warn. A drastic uptick in deepfake technology and service offerings across the Dark Web is the first sign a new wave of fraud is just about to crash in, according to a new report from Recorded Future, which ominously predicted that deepfakes are on the rise among threat actors with an enormous range of goals and interests.