Facial recognition model


Explainable AI for Analyzing Person-Specific Patterns in Facial Recognition Tasks

Borsukiewicz, Paweł Jakub, Samhi, Jordan, Klein, Jacques, Bissyandé, Tegawendé F.

arXiv.org Artificial Intelligence

The proliferation of facial recognition systems presents major privacy risks, driving the need for effective countermeasures. Current adversarial techniques apply generalized methods rather than adapting to individual facial characteristics, which limits both their effectiveness and their inconspicuousness. In this work, we introduce Layer Embedding Activation Mapping (LEAM), a novel technique that identifies which facial areas contribute most to recognition at the individual level. Unlike adversarial attack methods that aim to fool recognition systems, LEAM is an explainability technique designed to understand how these systems work, providing insights that could inform future privacy protection research. We integrate LEAM with a face parser to analyze data from 1000 individuals across 9 pre-trained facial recognition models. Our analysis reveals that, although individual layers vary significantly in their focus areas, the models' overall activation patterns prioritize similar facial regions across architectures. These patterns show significantly higher similarity between images of the same individual (Bhattacharyya coefficient: 0.32-0.57) than between images of different individuals (0.04-0.13), validating the existence of person-specific recognition patterns. Our results show that facial recognition models prioritize the central region of the face (nose areas account for 18.9-29.7% of critical recognition regions) while still distributing attention across multiple facial fragments. We confirmed that LEAM selects the relevant facial areas by applying validation occlusions to just the 1% most relevant LEAM-identified pixels; these occlusions proved transferable across different models. Our findings establish the foundation for future individually tailored privacy protection systems centered around LEAM's choice of the areas to be perturbed.
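
The person-specificity claim rests on the Bhattacharyya coefficient, which measures the overlap between two activation maps treated as probability distributions over pixels. A minimal sketch of that comparison (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def bhattacharyya_coefficient(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Overlap between two non-negative activation maps, in [0, 1].

    Each map is flattened and normalized to sum to 1, i.e. treated as a
    discrete probability distribution over pixel locations.
    """
    p = np.clip(map_a.ravel().astype(np.float64), 0, None)
    q = np.clip(map_b.ravel().astype(np.float64), 0, None)
    p /= p.sum()
    q /= q.sum()
    # BC(p, q) = sum_i sqrt(p_i * q_i); 1 means identical distributions.
    # The paper reports 0.32-0.57 for same-person pairs vs. 0.04-0.13
    # for different-person pairs.
    return float(np.sum(np.sqrt(p * q)))
```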


Surveying Facial Recognition Models for Diverse Indian Demographics: A Comparative Analysis on LFW and Custom Dataset

Pant, Pranav, Dadu, Niharika, Singh, Harsh V., Thakur, Anshul

arXiv.org Artificial Intelligence

Facial recognition technology has made significant advances, yet its effectiveness across diverse ethnic backgrounds, particularly in specific Indian demographics, is less explored. This paper presents a detailed evaluation of both traditional and deep learning-based facial recognition models using the established LFW dataset and our newly developed IITJ Faces of Academia Dataset (JFAD), which comprises images of students from IIT Jodhpur. This unique dataset is designed to reflect the ethnic diversity of India, providing a critical test bed for assessing model performance in a focused academic environment. We analyze models ranging from classical approaches such as Eigenfaces (holistic) and SIFT (local features) to advanced hybrid models that integrate CNNs with Gabor filters, Laplacian transforms, and segmentation techniques. Our findings reveal significant insights into the models' ability to adapt to the ethnic variability within Indian demographics and suggest modifications to enhance accuracy and inclusivity in real-world applications. The JFAD not only serves as a valuable resource for further research but also highlights the need for developing facial recognition systems that perform equitably across diverse populations.
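
For context, the classical end of the surveyed spectrum can be reproduced in a few lines: Eigenfaces is PCA over flattened face images followed by classification in the projected space. A minimal sketch on LFW (the component and neighbour counts are illustrative choices, not the paper's settings):

```python
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load LFW, keeping only identities with enough images to split.
lfw = fetch_lfw_people(min_faces_per_person=20, resize=0.5)
X_tr, X_te, y_tr, y_te = train_test_split(
    lfw.data, lfw.target, stratify=lfw.target, random_state=0)

# PCA components over flattened faces are the "eigenfaces".
pca = PCA(n_components=100, whiten=True).fit(X_tr)

# Identify faces by nearest neighbour in eigenface space.
clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_tr), y_tr)
print("LFW identification accuracy:", clf.score(pca.transform(X_te), y_te))
```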


PuFace: Defending against Facial Cloaking Attacks for Facial Recognition Models

Wen, Jing

arXiv.org Artificial Intelligence

The recently proposed facial cloaking attacks add invisible perturbations (cloaks) to facial images to protect users from being recognized by unauthorized facial recognition models. However, we show that the "cloaks" are not robust enough and can be removed from images. This paper introduces PuFace, an image purification system leveraging the generalization ability of neural networks to diminish the impact of cloaks by pushing the cloaked images towards the manifold of natural (uncloaked) images before the training process of facial recognition models. Specifically, we devise a purifier that takes all the training images, including both cloaked and natural images, as input and generates purified facial images close to the manifold where natural images lie. To meet the defense goal, we propose to train the purifier on particularly amplified cloaked images with a loss function that combines image loss and feature loss. Our empirical experiment shows PuFace can effectively defend against two state-of-the-art facial cloaking attacks and reduces the attack success rate from 69.84% to 7.61% on average without degrading the normal accuracy for various facial recognition models. Moreover, PuFace is a model-agnostic defense mechanism that can be applied to any facial recognition model without modifying the model structure.
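
The abstract's training objective can be sketched directly: an image-space term pulling the purifier's output toward the natural image, plus a feature-space term computed by a fixed recognition backbone. A hedged PyTorch sketch (the weighting `lam` and all names are assumptions, not the paper's code):

```python
import torch
import torch.nn.functional as F

def purifier_loss(purified: torch.Tensor,
                  natural: torch.Tensor,
                  feat_extractor: torch.nn.Module,
                  lam: float = 1.0) -> torch.Tensor:
    """Combined objective described in the abstract: image loss + feature loss.

    `purified` is the purifier's output for an (amplified) cloaked image,
    `natural` is the corresponding uncloaked image, and `feat_extractor`
    is a frozen facial recognition backbone.
    """
    image_loss = F.mse_loss(purified, natural)
    with torch.no_grad():
        target_feat = feat_extractor(natural)
    feature_loss = F.mse_loss(feat_extractor(purified), target_feat)
    return image_loss + lam * feature_loss
```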


Decide Whom Your Child Looks Like with Facial Recognition: Mommy or Daddy? - Sefik Ilkin Serengil

#artificialintelligence

Parents like to debate which of them their child looks like, but the discussion alone can never settle the question. Luckily, we now have very powerful facial recognition technology to get a real, unbiased answer. In this post, we are going to use deepface to decide which parent a child looks more like. We normally use facial recognition technology to verify whether a pair of faces belongs to the same person or to different people; facial recognition models such as FaceNet represent each face as a multi-dimensional vector.
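
The post's approach boils down to two verification calls. A minimal sketch with deepface (the file names are placeholders; the smaller embedding distance is taken as the closer resemblance):

```python
from deepface import DeepFace

# Compare the child's photo against each parent using FaceNet embeddings.
mom = DeepFace.verify(img1_path="child.jpg", img2_path="mom.jpg",
                      model_name="Facenet")
dad = DeepFace.verify(img1_path="child.jpg", img2_path="dad.jpg",
                      model_name="Facenet")

# Lower distance between embedding vectors means a closer facial match.
winner = "mommy" if mom["distance"] < dad["distance"] else "daddy"
print(f"Closer match: {winner} "
      f"(mom {mom['distance']:.3f} vs. dad {dad['distance']:.3f})")
```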


'Degraded' Synthetic Faces Could Help Improve Facial Image Recognition

#artificialintelligence

Researchers from Michigan State University have devised a way for synthetic faces to take a break from the deepfakes scene and do some good in the world – by helping image recognition systems become more accurate. Their new controllable face synthesis module (CFSM) can regenerate faces in the style of real-world video surveillance footage, rather than relying on the uniformly higher-quality images found in popular open-source celebrity datasets, which do not reflect the faults and shortcomings of genuine CCTV systems – facial blur, low resolution, and sensor noise, all factors that can hurt recognition accuracy. CFSM is not intended to authentically simulate head poses, expressions, or the other traits that deepfake systems target; rather, it uses style transfer to generate a range of alternative views in the style of the target recognition system, adapting its output to the resolution and range of 'eccentricities' found there. One use case is legacy systems that are unlikely to be upgraded due to cost: they currently contribute little to the new generation of facial recognition technologies because of the poor quality of their output, which may once have been leading-edge.
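
CFSM itself learns the target domain's style; as a crude, hand-crafted stand-in (not the MSU module), the degradations the article lists – low resolution, blur, sensor noise – can be emulated directly for data augmentation:

```python
import cv2
import numpy as np

def degrade_like_cctv(img: np.ndarray, scale: float = 0.25,
                      blur_ksize: int = 5,
                      noise_std: float = 8.0) -> np.ndarray:
    """Apply surveillance-style artifacts to a high-quality face crop.

    The parameters here are illustrative; CFSM instead *learns* these
    degradation styles from the target system's footage via style transfer.
    """
    h, w = img.shape[:2]
    # Low resolution: downsample, then upsample back to the original size.
    small = cv2.resize(img, (int(w * scale), int(h * scale)),
                       interpolation=cv2.INTER_AREA)
    low = cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)
    # Optical blur.
    low = cv2.GaussianBlur(low, (blur_ksize, blur_ksize), 0)
    # Sensor noise.
    noisy = low.astype(np.float32) + np.random.normal(0, noise_std, low.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```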


Using Makeup to Block Surveillance

Communications of the ACM

Anti-surveillance makeup, used by people who want to avoid being identified by facial recognition systems, is bold and striking – not exactly cloak-and-dagger material. While experts' opinions vary on the makeup's effectiveness at avoiding detection, they agree that its use is not yet widespread. Anti-surveillance makeup relies heavily on machine learning and deep learning models to "break up the symmetry of a typical human face" with highly contrasted markings, says John Magee, an associate computer science professor at Clark University in Worcester, MA, who specializes in computer vision research. However, Magee adds, "If you go out [wearing] that makeup, you're going to draw attention to yourself." Debate over the effectiveness of anti-surveillance makeup has been spurred by racial justice protesters who do not want to be tracked, Magee notes.


Facial Recognition Model

#artificialintelligence

While going through the UnpackAI deep learning boot camp, I decided to create a face classifier as my main project for the program. The objective of the experiment was to train a model that can distinguish between three groups of humans: European, African, and (East) Asian. To accomplish this goal, I needed a dataset with pictures for each group. Here I had to impose some limitations to simplify the model: I decided to focus on adult males with typical features for each group. To collect the dataset, I used a DuckDuckGo scraper with keywords that I had previously checked directly on the website, making sure the search results were satisfactory.
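
The post only says "duckduckgo scraper" without naming the tool; assuming the duckduckgo_search package, the collection step could look like this (keywords and counts are illustrative, mirroring the author's described choices):

```python
from pathlib import Path

import requests
from duckduckgo_search import DDGS  # assumed scraper package

def download_images(keywords: str, folder: str, n: int = 100) -> None:
    """Fetch image URLs from DuckDuckGo for one class and save the files."""
    dest = Path(folder)
    dest.mkdir(parents=True, exist_ok=True)
    with DDGS() as ddgs:
        for i, hit in enumerate(ddgs.images(keywords, max_results=n)):
            try:
                r = requests.get(hit["image"], timeout=10)
                r.raise_for_status()
                (dest / f"{i:04d}.jpg").write_bytes(r.content)
            except requests.RequestException:
                continue  # skip dead or slow links

for label in ("european man portrait", "african man portrait",
              "east asian man portrait"):
    download_images(label, folder=label.replace(" ", "_"))
```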


How Does Your AI Work? Nearly Two-Thirds Can't Say, Survey Finds - AI Summary

#artificialintelligence

Nearly two-thirds of C-level AI leaders can't explain how specific AI decisions or predictions are made, according to a new survey on AI ethics by FICO, which says there is room for improvement. FICO hired Corinium to query 100 AI leaders for its new study, called "The State of Responsible AI: 2021," which the credit-scoring company released today. More than two-thirds of survey-takers say the processes they have to ensure AI models comply with regulations are ineffective, while nine out of ten leaders who took the survey say inefficient monitoring of models presents a barrier to AI adoption. Given that the regulatory environment is still developing, it is concerning that 43% of respondents in FICO's study said they "have no responsibilities beyond meeting regulatory compliance to ethically manage AI systems whose decisions may indirectly affect people's livelihoods," such as audience segmentation models, facial recognition models, and recommendation systems, the company said. At a time when AI is making life-altering decisions for customers and stakeholders, this lack of awareness of the ethical and fairness concerns around AI poses a serious risk to companies, says Scott Zoldi, FICO's chief analytics officer.


Protect Your Profile Photo with a Privacy Cloak

#artificialintelligence

Do we pay attention to the privacy of the profile photos we make publicly available across social media? Have we ever worried about privacy when sharing innumerable photos of friends and family members on Facebook or Instagram? And why should we care about the privacy of our photos in the first place? We should because publicly available photos can be used for unauthorized facial recognition, which can invade our private lives. There is little doubt that facial recognition is a serious threat to privacy.


Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models

Shan, Shawn, Wenger, Emily, Zhang, Jiayun, Li, Huiying, Zheng, Haitao, Zhao, Ben Y.

arXiv.org Machine Learning

Today's proliferation of powerful facial recognition models poses a real threat to personal privacy. As Clearview.ai demonstrated, anyone can canvas the Internet for data and train highly accurate facial recognition models of us without our knowledge. We need tools to protect ourselves from unauthorized facial recognition systems and their numerous potential misuses. Unfortunately, work in related areas is limited in practicality and effectiveness. In this paper, we propose Fawkes, a system that allows individuals to inoculate themselves against unauthorized facial recognition models. Fawkes achieves this by helping users add imperceptible pixel-level changes (we call them "cloaks") to their own photos before publishing them online. When collected by a third-party "tracker" and used to train facial recognition models, these "cloaked" images produce functional models that consistently misidentify the user. We experimentally show that Fawkes provides 95+% protection against user recognition regardless of how trackers train their models. Even when clean, uncloaked images are "leaked" to the tracker and used for training, Fawkes can still maintain an 80+% protection success rate. In fact, we perform real experiments against today's state-of-the-art facial recognition services and achieve 100% success. Finally, we show that Fawkes is robust against a variety of countermeasures that try to detect or disrupt cloaks.
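
The released Fawkes tool implements the full method; the core idea – optimize a bounded perturbation so that a fixed feature extractor maps the photo near a dissimilar target identity – can be sketched in a simplified, PGD-style form (this illustrates the idea only, not the authors' code; Fawkes bounds perceptual DSSIM rather than a plain L-infinity budget):

```python
import torch
import torch.nn.functional as F

def make_cloak(image: torch.Tensor, target_image: torch.Tensor,
               feat_extractor: torch.nn.Module,
               budget: float = 0.03, steps: int = 100,
               lr: float = 0.01) -> torch.Tensor:
    """Return a cloaked copy of `image` (all tensors in [0, 1]).

    The cloak pulls the image's feature embedding toward that of a
    dissimilar `target_image`, so models trained on cloaked photos
    learn the wrong features for this identity.
    """
    with torch.no_grad():
        target_feat = feat_extractor(target_image)
    cloak = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([cloak], lr=lr)
    for _ in range(steps):
        feat = feat_extractor(torch.clamp(image + cloak, 0, 1))
        loss = F.mse_loss(feat, target_feat)  # move toward target identity
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            cloak.clamp_(-budget, budget)  # keep the cloak imperceptible
    return torch.clamp(image + cloak, 0, 1).detach()
```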