Increasing Fairness in Predictions Using Bias Parity Score Based Loss Function Regularization

Jain, Bhanu, Huber, Manfred, Elmasri, Ramez

arXiv.org Artificial Intelligence 

The use of automated decision support and decision-making systems (ADM) (Hardt, Price, and Srebro 2016) in applications with direct impact on people's lives has increasingly become a fact of life, e.g. in criminal justice (Kleinberg, Mullainathan, and Raghavan 2016; Jain et al. 2020b; Dressel and Farid 2018), medical diagnosis (Kleinberg, Mullainathan, and Raghavan 2019), insurance (Baudry and Robert 2019), credit card fraud detection (Dal Pozzolo et al. 2014), electronic health record data (Gianfrancesco et al. 2018), credit scoring (Huang, Chen, and Wang 2007) and many more diverse domains. This, in turn, has led to an urgent need for study and scrutiny of the bias-magnifying effects of machine learning and Artificial Intelligence algorithms and thus their potential to introduce and emphasize social inequalities and systematic discrimination in our society. Appropriately, much research is being done currently to mitigate bias in AI-based decision support systems (Ahsen, Ayvaci, and Raghunathan 2019; Kleinberg, Mullainathan, and Raghavan 2016; Noriega-Campero et al. 2019; Feldman 2015;

Contributions. We propose a technique that uses Bias Parity Score (BPS) measures to characterize fairness and develop a family of corresponding loss functions that are used as regularizers during training of Neural Networks to enhance fairness of the trained models. The goal here is to permit the system to actively pursue fair solutions during training while maintaining as high a performance on the task as possible. We apply the approach in the context of several fairness measures and investigate multiple loss function formulations and regularization weights in order to study the performance as well as potential drawbacks and deployment considerations. In these experiments we show that, if used with appropriate settings, the technique measurably reduces race-based bias in recidivism prediction, and demonstrate on the gender-based Adult Income dataset that the proposed method can outperform state-of-the-art techniques aimed at more targeted aspects of bias and fairness.
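To make the idea of a BPS-based regularizer concrete, the following is a minimal illustrative sketch in PyTorch, not the paper's exact formulation: it assumes BPS is computed as the min/max ratio of a per-group statistic, substitutes a differentiable soft-accuracy surrogate for that statistic so the penalty can be backpropagated, and the function and parameter names (bps_regularizer, fairness_regularized_loss, lam) are hypothetical.

import torch
import torch.nn.functional as F

def bps_regularizer(probs, labels, group):
    """Differentiable surrogate for a Bias Parity Score (BPS) penalty.

    probs  : predicted positive-class probabilities, shape (N,)
    labels : binary ground-truth labels as floats, shape (N,)
    group  : binary protected-attribute indicator, shape (N,)

    Assumption: BPS is taken as the min/max ratio of a per-group
    statistic; here the statistic is a soft per-group accuracy
    (mean probability assigned to the true class) so it stays
    differentiable. Batches are assumed to contain both groups.
    """
    soft_correct = probs * labels + (1 - probs) * (1 - labels)
    stat_a = soft_correct[group == 0].mean()
    stat_b = soft_correct[group == 1].mean()
    bps = torch.minimum(stat_a, stat_b) / torch.maximum(stat_a, stat_b)
    return 1.0 - bps  # zero when the two groups' statistics are in parity

def fairness_regularized_loss(logits, labels, group, lam=1.0):
    """Task loss plus a weighted BPS-style fairness penalty."""
    probs = torch.sigmoid(logits)
    labels = labels.float()
    task_loss = F.binary_cross_entropy(probs, labels)
    return task_loss + lam * bps_regularizer(probs, labels, group)

In this sketch the weight lam plays the role of the regularization weight studied in the paper, trading off parity of the per-group statistic against raw task performance; different choices of the per-group statistic would correspond to the different fairness measures the abstract mentions.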