Exploring Disparity-Accuracy Trade-offs in Face Recognition Systems: The Role of Datasets, Architectures, and Loss Functions
Jaiswal, Siddharth D, Basu, Sagnik, Sikdar, Sandipan, Mukherjee, Animesh
Automated Face Recognition Systems (FRSs), developed using deep learning models, are deployed worldwide for identity verification and facial attribute analysis. The performance of these models is determined by a complex interdependence among the model architecture, optimization/loss function and datasets. Although FRSs have surpassed human-level accuracy, they continue to exhibit disparities against certain demographics. Due to the ubiquity of applications, it is extremely important to understand the impact of the three components -- model architecture, loss function and face image dataset -- on the accuracy-disparity trade-off to design better, unbiased platforms. In this work, we perform an in-depth analysis of three FRSs for the task of gender prediction, with various architectural modifications resulting in ten deep-learning models coupled with four loss functions, and benchmark them on seven face datasets across 266 evaluation configurations. Our results show that all three components have an individual as well as a combined impact on both accuracy and disparity. We identify that datasets have an inherent property that causes them to perform similarly across models, independent of the choice of loss functions. Moreover, the choice of dataset determines the model's perceived bias -- the same model reports bias in opposite directions for three gender-balanced datasets of ``in-the-wild'' face images of popular individuals. Studying the facial embeddings shows that the models are unable to generalize a uniform definition of what constitutes a ``female face'' as opposed to a ``male face'', due to dataset diversity. We provide recommendations to model developers on using our study as a blueprint for model development and subsequent deployment.
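The accuracy-disparity trade-off studied above can be made concrete with a small sketch. The max-minus-min gap in per-group accuracy used below is one illustrative disparity metric, not necessarily the measure used in the paper:

```python
import numpy as np

def accuracy_disparity(y_true, y_pred, groups):
    """Per-group accuracy and disparity (max minus min group accuracy).

    y_true, y_pred: predicted/true gender labels per sample;
    groups: demographic group id per sample.
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        accs[g] = float((y_true[mask] == y_pred[mask]).mean())
    disparity = max(accs.values()) - min(accs.values())
    return accs, disparity

# Toy example: the model is perfect on group "A" but errs once on group "B".
y_true = [0, 1, 0, 1, 0, 1]
y_pred = [0, 1, 0, 1, 1, 1]
grp = ["A", "A", "A", "B", "B", "B"]
accs, disp = accuracy_disparity(y_true, y_pred, grp)
# accs["A"] = 1.0, accs["B"] = 2/3, disp = 1/3
```

Evaluating this pair of numbers per (model, loss, dataset) configuration is what makes the trade-off visible: two configurations can have equal overall accuracy but very different disparity.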
Mask-up: Investigating Biases in Face Re-identification for Masked Faces
Jaiswal, Siddharth D, Verma, Ankit Kr., Mukherjee, Animesh
AI-based Face Recognition Systems (FRSs) are now widely distributed and deployed as MLaaS solutions all over the world, more so since the COVID-19 pandemic, for tasks ranging from validating individuals' faces while buying SIM cards to surveillance of citizens. Extensive biases have been reported against marginalized groups in these systems and have led to highly discriminatory outcomes. The post-pandemic world has normalized wearing face masks, but FRSs have not kept up with the changing times. As a result, these systems are susceptible to mask-based face occlusion. In this study, we audit four commercial and nine open-source FRSs for the task of face re-identification between different varieties of masked and unmasked images across five benchmark datasets (total 14,722 images). These simulate a realistic validation/surveillance task as deployed in all major countries around the world. Three of the commercial and five of the open-source FRSs are highly inaccurate; they further perpetuate biases against non-White individuals, with the lowest accuracy being 0%. A survey for the same task with 85 human participants also results in a low accuracy of 40%. Thus, human-in-the-loop moderation in the pipeline does not alleviate the concerns, as has frequently been hypothesized in the literature. Our large-scale study shows that developers, lawmakers and users of such services need to rethink the design principles behind FRSs, especially for the task of face re-identification, taking cognizance of observed biases.
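The re-identification task audited above reduces, in open-source pipelines, to thresholded similarity between a masked probe embedding and an unmasked gallery embedding. A minimal sketch, assuming cosine similarity and a hypothetical threshold of 0.5 (each audited system uses its own matcher and threshold):

```python
import numpy as np

def reid_accuracy(emb_probe, emb_gallery, same_identity, threshold=0.5):
    """Cosine-similarity verification between masked probes and unmasked gallery.

    emb_probe, emb_gallery: (n, d) embedding arrays for paired images.
    same_identity: boolean ground truth per pair.
    """
    a = emb_probe / np.linalg.norm(emb_probe, axis=1, keepdims=True)
    b = emb_gallery / np.linalg.norm(emb_gallery, axis=1, keepdims=True)
    sims = (a * b).sum(axis=1)  # row-wise cosine similarity
    decisions = sims >= threshold
    return float((decisions == np.asarray(same_identity)).mean())

# Toy pairs: identical embeddings should match, orthogonal ones should not.
probe = np.array([[1.0, 0.0], [0.0, 1.0]])
gallery = np.array([[1.0, 0.0], [1.0, 0.0]])
acc = reid_accuracy(probe, gallery, [True, False])  # both decisions correct
```

Computing this accuracy separately per demographic group, as in the sketch after the first abstract above, is what surfaces the reported biases: masks can depress embedding similarity far more for some groups than others.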
Simultaneous Adversarial Attacks On Multiple Face Recognition System Components
Singh, Inderjeet, Kakizaki, Kazuya, Araki, Toshinori
In this work, we investigate the potential threat of adversarial examples to the security of face recognition systems. Although previous research has explored the adversarial risk to individual components of FRSs, our study presents an initial exploration of an adversary simultaneously fooling multiple components: the face detector and feature extractor in an FRS pipeline. We propose three multi-objective attacks on FRSs and demonstrate their effectiveness through a preliminary experimental analysis on a target system. Our attacks achieved up to 100% Attack Success Rates against both the face detector and feature extractor and were able to manipulate the face detection probability by up to 50% depending on the adversarial objective. This research identifies and examines novel attack vectors against FRSs and suggests possible ways to improve robustness by leveraging knowledge of these attack vectors during training of an FRS's components.
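A multi-objective attack of this kind can be sketched as a signed-gradient ascent step on a weighted sum of two surrogate losses. The weights, step size, and L-infinity budget below are illustrative assumptions, and the gradient callables stand in for autograd on a real detector and feature extractor; this is not the paper's actual attack:

```python
import numpy as np

def multi_objective_step(x, grad_det, grad_feat, w_det=0.5, w_feat=0.5,
                         step=2 / 255, eps=8 / 255, x_orig=None):
    """One signed-gradient step on a weighted sum of two attack objectives.

    grad_det / grad_feat: callables returning the gradient of the detector
    and feature-extractor losses w.r.t. the image x.
    The perturbation is projected back into an L-infinity ball of radius eps.
    """
    if x_orig is None:
        x_orig = x
    g = w_det * grad_det(x) + w_feat * grad_feat(x)
    x_new = x + step * np.sign(g)
    return np.clip(x_new, x_orig - eps, x_orig + eps)

# Toy gradients: both objectives push all pixel values upward.
x0 = np.zeros(4)
x1 = multi_objective_step(x0, lambda x: np.ones_like(x),
                          lambda x: np.ones_like(x))
# Each pixel moves by step = 2/255, well inside the eps ball.
```

The weighting makes the trade-off between the two objectives explicit: pushing `w_det` up prioritizes suppressing (or forcing) detection, while `w_feat` prioritizes fooling the feature extractor.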
Computing Forward Reachable Sets for Nonlinear Adaptive Multirotor Controllers
In multirotor systems, guaranteeing safety while considering unknown disturbances is essential for robust trajectory planning. The forward reachable set (FRS), the set of feasible states subject to bounded disturbances, can be utilized to identify robust and collision-free trajectories by checking its intersections with obstacles. However, in many cases, the FRS cannot be calculated in real time and is too conservative to be used in actual applications. In this paper, we address these issues by introducing a nonlinear disturbance observer (NDOB) and an adaptive controller to the multirotor system. We express the FRS of the closed-loop multirotor system with an adaptive controller in augmented state space using Hamilton-Jacobi reachability analysis. Then, we derive a closed-form expression that over-approximates the FRS as an ellipsoid, allowing for real-time computation. By compensating for disturbances with the adaptive controller, our over-approximated FRS can be smaller than other ellipsoidal over-approximations. Numerical examples validate the computational efficiency and the smaller size of our proposed FRS.
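The point of an ellipsoidal FRS over-approximation is that obstacle checks become cheap. A minimal, deliberately conservative sketch, assuming the FRS ellipsoid {x : (x - c)^T Q^{-1} (x - c) <= 1} is further bounded by a ball of radius sqrt(lambda_max(Q)); the paper's closed-form expression yields a tighter ellipsoid than this crude ball bound:

```python
import numpy as np

def frs_sphere_clearance(center, Q, obstacle_center, obstacle_radius):
    """Conservative collision check for an ellipsoidal FRS over-approximation.

    The FRS is the ellipsoid {x : (x - center)^T Q^{-1} (x - center) <= 1}.
    We bound it by a ball of radius sqrt(lambda_max(Q)) and report True when
    that ball is disjoint from a spherical obstacle (provably safe).
    """
    frs_radius = np.sqrt(np.max(np.linalg.eigvalsh(Q)))
    dist = np.linalg.norm(np.asarray(obstacle_center) - np.asarray(center))
    return bool(dist > frs_radius + obstacle_radius)

# Toy 2-D check: FRS ellipsoid with semi-axes 1 and 0.5 at the origin.
Q = np.diag([1.0, 0.25])
safe = frs_sphere_clearance([0.0, 0.0], Q, [3.0, 0.0], 1.0)    # well clear
unsafe = frs_sphere_clearance([0.0, 0.0], Q, [1.5, 0.0], 1.0)  # too close
```

Because the check is a single eigenvalue computation and a norm, it can run inside a real-time planning loop; a "False" here means "possibly unsafe", which is the correct failure direction for an over-approximation.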
On Brightness Agnostic Adversarial Examples Against Face Recognition Systems
Singh, Inderjeet, Momiyama, Satoru, Kakizaki, Kazuya, Araki, Toshinori
This paper introduces a novel adversarial example generation method against face recognition systems (FRSs). An adversarial example (AX) is an image with deliberately crafted noise that causes incorrect predictions by a target system. The AXs generated by our method remain robust under real-world brightness changes. Our method performs non-linear brightness transformations while leveraging the concept of curriculum learning during the attack generation procedure. We demonstrate through comprehensive experimental investigations in the digital and physical worlds that our method outperforms conventional techniques. Furthermore, this method enables practical risk assessment of FRSs against brightness-agnostic AXs.
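The curriculum idea can be illustrated with a deterministic gamma schedule that ramps the non-linear brightness distortion from mild to harsh across attack iterations. The schedule, the gamma range, and the identity `attack_step` below are illustrative stand-ins, not the paper's actual procedure:

```python
import numpy as np

def gamma_schedule(n_iters, gamma_max=2.0):
    """Curriculum of gamma values ramping linearly from 1.0 to gamma_max."""
    return [1.0 + (gamma_max - 1.0) * i / max(n_iters - 1, 1)
            for i in range(n_iters)]

def brightness_curriculum(x, attack_step, n_iters=5, gamma_max=2.0):
    """Apply one attack update per iteration on a gamma-transformed image.

    attack_step: callable(image) -> image, a stand-in for one adversarial
    update; images are assumed to lie in [0, 1].
    """
    for gamma in gamma_schedule(n_iters, gamma_max):
        x = attack_step(np.clip(x, 0.0, 1.0) ** gamma)
    return x

# Identity "attack" on a mid-gray pixel just composes the gamma transforms:
# 0.5 ** (1.0 * 1.5 * 2.0) = 0.5 ** 3 = 0.125.
out = brightness_curriculum(np.array([0.5]), lambda img: img, n_iters=3)
```

Starting with mild transforms and only later exposing the perturbation to harsh gamma values mirrors curriculum learning: the attack first converges on the easy objective, then hardens against the brightness changes it will face in the physical world.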