Beyond Individual and Group Fairness
Pranjal Awasthi, Corinna Cortes, Yishay Mansour, Mehryar Mohri
Learning algorithms trained on large amounts of data are increasingly adopted in applications with significant individual and social consequences, such as selecting loan applicants, filtering résumés of job applicants, estimating the likelihood that a defendant will commit future crimes, or deciding where to deploy police officers. Analyzing the risk of bias in these systems is therefore crucial. Such analysis is also critical for seemingly less socially consequential applications such as ad placement, recommendation systems, speech recognition, and many other common applications of machine learning. These biases can arise from the way the training data were collected, from an improper choice of the loss function being optimized, or from other algorithmic choices.
Aug-21-2020