Fawkes
FairDeFace: Evaluating the Fairness and Adversarial Robustness of Face Obfuscation Methods
Seyyed Mohammad Sadegh Moosavi Khorzooghi, Poojitha Thota, Mohit Singhal, Abolfazl Asudeh, Gautam Das, Shirin Nilizadeh
The lack of a common platform and benchmark datasets for evaluating face obfuscation methods has been a challenge, with every method being tested using arbitrary experiments, datasets, and metrics. While prior work has demonstrated that face recognition systems exhibit bias against some demographic groups, there is a substantial gap in our understanding of the fairness of face obfuscation methods. Fair face obfuscation methods can ensure equitable protection across diverse demographic groups, especially since they can be used to preserve the privacy of vulnerable populations. To address these gaps, this paper introduces FairDeFace, a comprehensive framework designed to assess the adversarial robustness and fairness of face obfuscation methods. The framework comprises a set of modules encompassing data benchmarks, face detection and recognition algorithms, adversarial models, utility detection models, and fairness metrics. FairDeFace serves as a versatile platform into which any face obfuscation method can be integrated, allowing rigorous testing and comparison with other state-of-the-art methods. In its current implementation, FairDeFace incorporates six attacks and several privacy, utility, and fairness metrics. Using FairDeFace, and by conducting more than 500 experiments, we evaluated and compared the adversarial robustness of seven face obfuscation methods. This extensive analysis yielded many interesting findings, both about the degree of robustness of existing methods and about their biases against some gender or racial groups. FairDeFace also visualizes the image regions on which obfuscation and verification attacks focus, showing not only which areas are most altered during obfuscation for certain demographic groups, but also why obfuscation fails for them, by comparing the focus areas of obfuscation and verification.
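To make the kind of fairness comparison described above concrete, the sketch below shows one simple way a per-group protection rate and a fairness gap could be computed from attack results. The function and field names are hypothetical and are not taken from FairDeFace's actual code or metrics.

```python
# A minimal sketch of a fairness gap for a face obfuscation method across
# demographic groups. Names are illustrative, not FairDeFace's API.
from collections import defaultdict

def protection_rates(records):
    """records: iterable of dicts with keys 'group' (e.g., a gender or race label)
    and 'reidentified' (True if an attack re-identified the obfuscated face)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hits[r["group"]] += int(not r["reidentified"])  # protected = not re-identified
    return {g: hits[g] / totals[g] for g in totals}

def fairness_gap(records):
    """Largest difference in protection rate between any two groups
    (0.0 means perfectly equal protection)."""
    rates = protection_rates(records)
    return max(rates.values()) - min(rates.values())

# Example: a method that protects 90% of one group but only 70% of another
# has a fairness gap of 0.20.
sample = (
    [{"group": "A", "reidentified": False}] * 9 + [{"group": "A", "reidentified": True}] * 1 +
    [{"group": "B", "reidentified": False}] * 7 + [{"group": "B", "reidentified": True}] * 3
)
print(protection_rates(sample), fairness_gap(sample))
```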
2024 Innovator of the Year: Shawn Shan builds tools to help artists fight back against exploitative AI
Now artists are fighting back. And some of the most powerful tools they have were built by Shawn Shan, 26, a PhD student in computer science at the University of Chicago (and MIT Technology Review's 2024 Innovator of the Year). Shan got his start in AI security and privacy as an undergraduate there and participated in a project that built Fawkes, a tool to protect faces from facial recognition technology. But it was conversations with artists who had been hurt by the generative AI boom that propelled him into the middle of one of the biggest fights in the field. Soon after learning about the impact on artists, Shan and his advisors Ben Zhao (who made our Innovators Under 35 list in 2006) and Heather Zheng (who was on the 2005 list) decided to build a tool to help. They gathered input from more than a thousand artists to learn what they needed and how they would use any protective technology.
Hackers Fool Facial Recognition Into Thinking I'm Mark Zuckerberg
It's not the first time researchers have created methods for subverting computer vision systems. Last year, researchers at the University of Chicago released Fawkes, a publicly available privacy tool designed to defeat facial recognition. Shawn Shan, a PhD student and co-creator of Fawkes, told Motherboard that, based on the information Adversa AI has made public, its technique seems feasible for defeating publicly available recognition systems. State-of-the-art systems may prove harder, he said.
Ways To Stop AI From Recognizing Your Face In Selfies
Fawkes may prevent a new facial recognition system from recognizing a person, but it can't change or sabotage existing systems that have already been trained on one's unprotected images. Valeriia Cherepanova and her colleagues at the University of Maryland, one of the teams presenting at ICLR, recently addressed this issue and developed a tool called LowKey. This tool expands on Fawkes by applying perturbations to images based on a stronger adversarial attack, which can also fool pretrained commercial models.
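As an illustration of the kind of attack described here, the sketch below applies a generic PGD-style perturbation that pushes an image's embedding away from its original under a small L-infinity budget. The embedding network is a stand-in; LowKey's actual attack uses different losses, model ensembles, and preprocessing.

```python
# A rough PGD-style sketch: perturb an image, within a small L-infinity budget,
# so a pretrained face-embedding model maps it far from the clean embedding.
import torch
import torch.nn.functional as F

def perturb(image, embed, eps=0.03, alpha=0.005, steps=40):
    """image: float tensor in [0, 1] of shape (1, 3, H, W); embed: maps images to embeddings."""
    with torch.no_grad():
        target = embed(image)                              # embedding of the clean image
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        emb = embed((image + delta).clamp(0, 1))
        loss = -F.cosine_similarity(emb, target).mean()    # higher loss = farther from original
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()             # gradient-ascent step on the perturbation
            delta.clamp_(-eps, eps)                        # keep the change within the budget
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()

# Example with a toy embedding network (a real attack would target a face-recognition CNN):
net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
cloaked = perturb(torch.rand(1, 3, 32, 32), net)
```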
Worried About Privacy for Your Selfies? These Tools Can Help Spoof Facial Recognition AI
Ever wondered what happens to a selfie you upload on a social media site? Activists and researchers have long warned about data privacy, cautioning that photographs uploaded to the Internet may be used to train artificial intelligence (AI) powered facial recognition tools. These AI-enabled tools (such as Clearview, AWS Rekognition, Microsoft Azure, and Face) could in turn be used by governments or other institutions to track people and even draw conclusions such as the subject's religious or political preferences. Researchers have come up with ways to dupe or spoof these AI tools so that they cannot recognise, or even detect, a selfie, using adversarial attacks: techniques that alter input data so that a deep-learning model makes mistakes. Two of these methods were presented last week at the International Conference on Learning Representations (ICLR), a leading AI conference that was held virtually.
How to stop AI from recognizing your face in selfies
A number of AI researchers are pushing back and developing ways to make sure AIs can't learn from personal data. Two of the latest are being presented this week at ICLR, a leading AI conference. "I don't like people taking things from me that they're not supposed to have," says Emily Wenger at the University of Chicago, who developed one of the first tools to do this, called Fawkes, with her colleagues last summer: "I guess a lot of us had a similar idea at the same time." Actions like deleting data that companies have on you, or deliberately polluting data sets with fake examples, can make it harder for companies to train accurate machine-learning models. But these efforts typically require collective action, with hundreds or thousands of people participating, to make an impact.
Oriole: Thwarting Privacy against Trustworthy Deep Learning Models
Liuqiao Chen, Hu Wang, Benjamin Zi Hao Zhao, Minhui Xue, Haifeng Qian
Deep neural networks have achieved unprecedented success in face recognition, to the point that any individual can crawl others' data from the Internet, without their explicit permission, to train high-precision face recognition models, creating a serious violation of privacy. Recently, a well-known system named Fawkes (published at USENIX Security 2020) claimed that this privacy threat can be neutralized by uploading cloaked user images instead of their original images. In this paper, we present Oriole, a system that combines the advantages of data poisoning attacks and evasion attacks to thwart the protection offered by Fawkes, by training the attacker's face recognition model with multi-cloaked images generated by Oriole. Consequently, the face recognition accuracy of the attack model is maintained and the weaknesses of Fawkes are revealed. Experimental results show that our proposed Oriole system is able to effectively interfere with the performance of the Fawkes system and achieve promising attack results. Our ablation study highlights multiple principal factors that affect the performance of the Oriole system, including the DSSIM perturbation budget, the ratio of leaked clean user images, and the number of multi-cloaks for each uncloaked image. We also identify and discuss at length the vulnerabilities of Fawkes. We hope that the new methodology presented in this paper will inform the security community of the need to design more robust privacy-preserving deep learning models.
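The following sketch illustrates, at a very high level, the training-set construction the abstract describes: several cloaked variants of each scraped image ("multi-cloaks") plus a small ratio of leaked clean images, all labeled with the true identity. The function names and structure are placeholders rather than Oriole's actual implementation.

```python
# A high-level sketch of assembling an attacker's training set from multi-cloaked
# images and a fraction of leaked clean images. Not Oriole's actual code.
import random

def build_attacker_set(user_images, cloak_fns, leak_ratio=0.1, seed=0):
    """user_images: list of (image, identity) pairs scraped by the attacker.
    cloak_fns: list of functions, each producing one cloaked variant of an image.
    leak_ratio: fraction of images the attacker also holds in clean (uncloaked) form."""
    rng = random.Random(seed)
    training_set = []
    for image, identity in user_images:
        # Multi-cloaks: one sample per cloaking variant, all labeled with the true identity.
        for cloak in cloak_fns:
            training_set.append((cloak(image), identity))
        # Leaked clean images: included with probability leak_ratio.
        if rng.random() < leak_ratio:
            training_set.append((image, identity))
    return training_set
```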
Protect Your Profile Photo with a Privacy Cloak
Do we give any thought to the privacy of the profile photos we make publicly available across social media? Have we ever worried about privacy when sharing innumerable photos of friends and family members on Facebook or Instagram? And why should we care about the privacy of our photos in the first place? We should, because our publicly available photos can be used for unauthorized facial recognition, and that can invade our private lives. There is little doubt that facial recognition is a serious threat to privacy.
Can This AI Filter Protect Identities From Facial Recognition System?
Facial recognition technology has long been a matter of grave concern, so much so that major tech giants like Microsoft, Amazon, and IBM, as well as Google, stopped selling their FRT to police authorities earlier this year. Additionally, Clearview AI's groundbreaking facial recognition app, which scraped billions of images of people without consent, made the matter even worse for the public. In fact, the whole practice of companies using people's social media images without permission to train their FRT algorithms can be troublesome for the general public's identity and personal privacy. To protect people's identities from companies that might misuse them, researchers from the computer science department of the University of Chicago proposed an AI system to fool these facial recognition systems. Named Fawkes, after the Guy Fawkes mask, this AI system is designed to help users safeguard their images and selfies with a filter against these unwanted facial recognition models. This filter, which the researchers call a "cloak," adds imperceptible pixel-level changes that cannot be seen by the human eye but can deceive these FRTs.
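To give a sense of how "invisible" such a cloak is meant to be, the sketch below checks that a cloaked photo stays within a small DSSIM distance of the original, the kind of perceptual-similarity budget discussed in the Fawkes and Oriole work. The 0.007 threshold is illustrative, not an official setting, and the helper names are hypothetical.

```python
# A small sketch of an imperceptibility check for a cloaked image using DSSIM.
import numpy as np
from skimage.metrics import structural_similarity

def dssim(original, cloaked):
    """original, cloaked: float arrays in [0, 1] of shape (H, W, 3)."""
    ssim = structural_similarity(original, cloaked, channel_axis=-1, data_range=1.0)
    return (1.0 - ssim) / 2.0   # DSSIM: 0 means identical, larger means more visible change

def within_budget(original, cloaked, budget=0.007):
    """True if the pixel-level changes stay under the (illustrative) perceptual budget."""
    return dssim(original, cloaked) <= budget

# Example: a tiny random perturbation should stay well within the budget.
img = np.random.rand(64, 64, 3)
print(within_budget(img, np.clip(img + 0.001 * np.random.randn(64, 64, 3), 0, 1)))
```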