FairProof : Confidential and Certifiable Fairness for Neural Networks
Yadav, Chhavi, Chowdhury, Amrita Roy, Boneh, Dan, Chaudhuri, Kamalika
–arXiv.org Artificial Intelligence
Machine learning models are increasingly used in societal applications, yet legal and privacy concerns often demand that they be kept confidential. As a result, consumers, who are at the receiving end of model predictions, increasingly distrust the fairness properties of these models. To address this, we propose FairProof, a system that uses Zero-Knowledge Proofs (a cryptographic primitive) to publicly verify the fairness of a model while maintaining confidentiality. We also propose a fairness certification algorithm for fully-connected neural networks that is well-suited to ZKPs and is used in this system. We implement FairProof in Gnark and demonstrate empirically that our system is practically feasible.

Recent use of ML models in high-stakes societal applications (Khandani et al., 2010; Brennan et al., 2009; Datta et al., 2014) has raised serious concerns about their fairness (Angwin et al., 2016; Vigdor, 2019; Dastin, 2018; Wall & Schellmann, 2021). As a result, there is growing distrust among consumers at the receiving end of ML-based decisions (Dwork & Minow, 2022). To increase consumer trust, there is a need for technology that enables public verification of the fairness properties of these models. A major barrier to such verification is that legal and privacy concerns demand that organizations keep their models confidential. The resulting lack of verifiability can enable misbehavior such as model swapping, wherein a malicious entity uses different models for different customers, leading to unfair treatment. What is needed, therefore, is a solution that allows public verification of a model's fairness and ensures that the same model is used for every prediction (model uniformity), all while maintaining model confidentiality. The canonical approach to evaluating fairness is a statistics-based third-party audit (Yadav et al., 2022; Yan & Zhang, 2022; Pentyala et al., 2022).
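The model-uniformity requirement above can be illustrated with a simple commitment scheme. The sketch below is a minimal Python illustration of the idea (using a plain SHA-256 hash), not the paper's actual protocol, which uses cryptographic commitments together with zero-knowledge proofs in Gnark so that nothing about the weights is revealed: the provider publishes a commitment to the model weights once, and any later prediction can be audited against that same commitment, making undetected model swapping impossible.

```python
import hashlib
import json

def commit_to_model(weights: dict) -> str:
    """Publish once: a binding commitment to the model weights.

    A bare hash is binding but not hiding; the real system would use a
    hiding commitment plus a ZKP so the weights stay confidential.
    """
    serialized = json.dumps(weights, sort_keys=True).encode()
    return hashlib.sha256(serialized).hexdigest()

def verify_same_model(weights: dict, commitment: str) -> bool:
    """Audit step: check that a prediction was served by the committed model."""
    return commit_to_model(weights) == commitment

# Toy fully-connected model, represented as nested lists of weights/biases.
model = {"W1": [[0.5, -0.2], [0.1, 0.3]], "b1": [0.0, 0.1],
         "W2": [[1.0, -1.0]], "b2": [0.0]}

c = commit_to_model(model)
assert verify_same_model(model, c)           # honest provider: check passes
swapped = {**model, "b2": [0.5]}             # model swapping: weights changed
assert not verify_same_model(swapped, c)     # swap is detected
```

In FairProof the commitment additionally anchors the fairness certificate: the ZKP shows that the committed model satisfies the fairness property, so the audit of uniformity and of fairness refer to the same hidden weights.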
Feb-19-2024