Fairness with Overlapping Groups
Forest Yang, Moustapha Cisse, Sanmi Koyejo
Machine learning systems inform an increasingly large number of critical decisions in diverse settings. They assist medical diagnosis (McKinney et al., 2020), guide policing (Meijer and Wessels, 2019), and power credit scoring systems (Tsai and Wu, 2008). While these systems have demonstrated their value in many sectors, they are prone to unwanted biases that can lead to discrimination against protected subgroups within the population. For example, recent studies have revealed biases in predictive policing and criminal sentencing systems (Meijer and Wessels, 2019; Chouldechova, 2017). The blossoming body of research in algorithmic fairness aims to address this issue by introducing novel algorithms that guarantee a specified level of non-discrimination in predictions.