Fairness with Overlapping Groups

Forest Yang, Moustapha Cisse, Sanmi Koyejo

arXiv.org Machine Learning 

Machine learning models inform an increasingly large number of critical decisions in diverse settings. They assist medical diagnosis (McKinney et al., 2020), guide policing (Meijer and Wessels, 2019), and power credit scoring systems (Tsai and Wu, 2008). While these models have demonstrated their value in many sectors, they are prone to unwanted biases, leading to discrimination against protected subgroups within the population. For example, recent studies have revealed biases in predictive policing and criminal sentencing systems (Meijer and Wessels, 2019; Chouldechova, 2017). The growing body of research in algorithmic fairness aims to study and address this issue by introducing novel algorithms that guarantee a certain level of non-discrimination in the predictions.
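As a concrete illustration of "non-discrimination in the predictions," one widely used fairness criterion is demographic parity: a classifier's positive-prediction rate should be similar across protected groups. The sketch below is not from this paper; the function name, group labels, and data are invented for illustration.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rate between any
    two groups; 0 means perfect demographic parity."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Toy predictions for two hypothetical groups "a" and "b".
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_gap(y_pred, group))  # 0.75 - 0.25 = 0.5
```

With overlapping groups (the setting of this paper), an individual can belong to several protected groups at once, so the same gap would be evaluated over every group an individual belongs to rather than over a single disjoint partition.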
