Facial liveness verification


Deepfakes expose vulnerabilities in certain facial recognition technology


Mobile devices use facial recognition technology to help users quickly and securely unlock their phones, make financial transactions or access medical records. But facial recognition technologies that employ a specific user-detection method are highly vulnerable to deepfake-based attacks that could lead to significant security concerns for users and applications, according to new research involving the Penn State College of Information Sciences and Technology.

The researchers found that most application programming interfaces that use facial liveness verification--a feature of facial recognition technology that uses computer vision to confirm the presence of a live user--don't always detect digitally altered photos or videos of individuals made to look like a live version of someone else, also known as deepfakes. Applications that do use these detection measures are also significantly less effective at identifying deepfakes than the app providers have claimed.

"In recent years we have observed significant development of facial authentication and verification technologies, which have been deployed in many security-critical applications," said Ting Wang, associate professor of information sciences and technology and a principal investigator on the project.
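To make the vulnerability concrete, here is a minimal sketch of a naive motion-based liveness check of the kind a replayed deepfake video can defeat. Everything here is invented for illustration: the frame representation, the `motion_score` helper and its threshold are not from the study or any vendor's API. The point is that a check keyed to frame-to-frame motion rejects a static photo but happily accepts a deepfake video, which also moves.

```python
# Hypothetical illustration only -- not any vendor's actual liveness logic.
# Frames are modeled as flat lists of pixel intensities.

def motion_score(frames):
    """Mean absolute pixel difference between consecutive frames."""
    total, count = 0, 0
    for prev, cur in zip(frames, frames[1:]):
        for a, b in zip(prev, cur):
            total += abs(a - b)
            count += 1
    return total / count if count else 0.0

def naive_liveness_check(frames, threshold=2.0):
    """Flag input as 'live' if the frames show enough motion.

    A static photo held up to the camera fails, but a replayed
    deepfake video also exhibits motion, so it passes too.
    """
    return motion_score(frames) > threshold

# A static photo: identical frames, zero motion.
photo = [[10, 20, 30]] * 5
# A deepfake video: frame-to-frame variation mimics a live subject.
deepfake = [[10, 20, 30], [14, 25, 28], [9, 18, 35], [15, 22, 31]]

print(naive_liveness_check(photo))     # rejected
print(naive_liveness_check(deepfake))  # accepted -- the vulnerability
```

Real FLV systems combine several signals (challenge-response, texture analysis, depth), but the study's finding is that even those richer pipelines often fail against current deepfake generation methods.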


Deepfakes Can Effectively Fool Many Major Facial 'Liveness' APIs


A new research collaboration between the US and China has probed the susceptibility to deepfakes of some of the biggest face-based authentication systems in the world, and found that most of them are vulnerable to developing and emerging forms of deepfake attack.

The researchers conducted deepfake-based intrusions using a custom framework deployed against Facial Liveness Verification (FLV) systems that are commonly supplied by major vendors, and sold as a service to downstream clients such as airlines and insurance companies. Facial Liveness is intended to repel the use of techniques such as adversarial image attacks, the use of masks and pre-recorded video, so-called 'master faces', and other forms of visual ID cloning.

The study concludes that the limited number of deepfake-detection modules deployed in these systems, many of which serve millions of customers, are far from infallible, and may have been configured on deepfake techniques that are now outmoded, or may be too architecture-specific.

'[Different] deepfake methods also show variations across different vendors…Without access to the technical details of the target FLV vendors, we speculate that such variations are attributed to the defense measures deployed by different vendors.'
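The evaluation the study describes amounts to submitting probe videos, both genuine and deepfaked, to each vendor's FLV service and tallying how often deepfakes are accepted as live. The sketch below shows that bookkeeping only, with stubbed verdicts in place of real API calls; the result format, labels, and numbers are all invented for illustration and do not come from the paper or any vendor.

```python
# Hypothetical harness sketch: verdicts are stubbed stand-ins for
# responses a real FLV endpoint would return; no vendor API is called.

def bypass_rate(results):
    """Fraction of deepfake probes a vendor's FLV accepted as 'live'."""
    deepfakes = [r for r in results if r["label"] == "deepfake"]
    if not deepfakes:
        return 0.0
    accepted = sum(1 for r in deepfakes if r["verdict"] == "live")
    return accepted / len(deepfakes)

# Invented verdicts for one fictional vendor.
results_vendor_a = [
    {"label": "genuine",  "verdict": "live"},
    {"label": "deepfake", "verdict": "live"},   # deepfake bypassed FLV
    {"label": "deepfake", "verdict": "spoof"},  # correctly rejected
    {"label": "deepfake", "verdict": "live"},   # bypassed again
]

print(f"deepfake bypass rate: {bypass_rate(results_vendor_a):.0%}")
```

Comparing such per-vendor rates across different deepfake generation methods is what lets the authors observe the vendor-to-vendor variation quoted above.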