Revisiting (Un)Fairness in Recourse by Minimizing Worst-Case Social Burden
Barrainkua, Ainhize, De Toni, Giovanni, Lozano, Jose Antonio, Quadrianto, Novi
Machine learning-based predictions are increasingly used in sensitive decision-making applications that directly affect our lives. This has led to extensive research into ensuring the fairness of classifiers. Beyond just fair classification, emerging legislation now mandates that when a classifier delivers a negative decision, it must also offer actionable steps an individual can take to reverse that outcome. This concept is known as algorithmic recourse. Nevertheless, many researchers have expressed concerns about the fairness guarantees within the recourse process itself. In this work, we provide a holistic theoretical characterization of unfairness in algorithmic recourse, formally linking fairness guarantees in recourse and classification, and highlighting limitations of the standard equal cost paradigm. We then introduce a novel fairness framework based on social burden, along with a practical algorithm (MISOB), broadly applicable under real-world conditions. Empirical results on real-world datasets show that MISOB reduces the social burden across all groups without compromising overall classifier accuracy.
- Law (1.00)
- Banking & Finance > Credit (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Data Science > Data Mining (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.67)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.47)
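The notion of per-group social burden that the abstract above minimizes can be made concrete. The sketch below is a hypothetical simplification, not the paper's MISOB algorithm: it assumes a linear classifier, takes an individual's cheapest recourse cost to be their distance to the decision boundary, defines a group's social burden as the average such cost over its rejected members, and reports the worst case across groups. All names and numbers here are illustrative assumptions.

```python
import numpy as np

def recourse_cost(X, w, b):
    """Cheapest recourse cost under a linear rule w.x + b >= 0:
    distance to the decision boundary, zero for accepted individuals.
    (A simplifying assumption, not the paper's cost model.)"""
    scores = X @ w + b
    return np.maximum(0.0, -scores) / np.linalg.norm(w)

def worst_case_social_burden(X, groups, w, b):
    """Average recourse cost of rejected members, per group, plus the
    maximum over groups (the worst-case social burden)."""
    costs = recourse_cost(X, w, b)
    burdens = {}
    for g in np.unique(groups):
        mask = (groups == g) & (costs > 0)  # rejected members of group g
        burdens[g] = costs[mask].mean() if mask.any() else 0.0
    return burdens, max(burdens.values())

# Synthetic data: 200 individuals, 2 features, 2 groups (all illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
groups = rng.integers(0, 2, size=200)
w, b = np.array([1.0, 1.0]), -0.5

burdens, worst = worst_case_social_burden(X, groups, w, b)
```

A burden-minimizing approach would then adjust the classifier to shrink `worst`, rather than equalizing recourse cost across groups as in the equal cost paradigm the abstract critiques.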
The Social Cost of Strategic Classification
Milli, Smitha, Miller, John, Dragan, Anca D., Hardt, Moritz
As machine learning increasingly supports consequential decision making, its vulnerability to manipulation and gaming is of growing concern. When individuals learn to adapt their behavior to the specifics of a statistical decision rule, its original predictive power will deteriorate. This widely observed empirical phenomenon, known as Campbell's Law or Goodhart's Law, is often summarized as: "Once a measure becomes a target, it ceases to be a good measure" [25]. Institutions using machine learning to make high-stakes decisions naturally wish to make their classifiers robust to strategic behavior. A growing line of work has sought algorithms that achieve higher utility for the institution in settings where we anticipate a strategic response from the classified individuals [10, 5, 14]. Broadly speaking, the resulting solution concepts correspond to more conservative decision boundaries that increase robustness to some form of covariate shift.
- Education (0.68)
- Government > Regional Government > North America Government > United States Government (0.67)
- Banking & Finance > Credit (0.47)
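The dynamic the abstract describes can be sketched in a few lines. This is a minimal illustration under assumptions not taken from the paper: a one-dimensional score, a threshold rule, a fixed benefit of acceptance, and a linear cost of moving one's score. Agents game the naive threshold whenever the benefit outweighs the manipulation cost, and the institution's "more conservative decision boundary" is modeled as raising the threshold by the largest gameable gap.

```python
def best_response(x, theta, cost=0.5, benefit=1.0):
    """An agent below threshold theta moves exactly to theta iff the
    benefit of acceptance covers the linear manipulation cost."""
    if x >= theta:
        return x  # already accepted, no incentive to move
    return theta if benefit >= cost * (theta - x) else x

agents = [0.2, 0.8, 1.4, 2.1, 3.0]  # true scores (illustrative)
theta = 2.0                          # naive decision threshold

# Under the naive rule, anyone within benefit/cost = 2.0 units of the
# threshold games it, so the measure stops separating agents.
gamed = [best_response(x, theta) for x in agents]
accepted_naive = sum(x >= theta for x in gamed)

# Conservative institutional response: raise the threshold by the
# largest gameable gap (benefit / cost), so only agents whose true
# score already clears the original bar can afford to reach it.
theta_robust = theta + 1.0 / 0.5
gamed_robust = [best_response(x, theta_robust) for x in agents]
accepted_robust = sum(x >= theta_robust for x in gamed_robust)
```

Note the social cost this robustness imposes: the agent at 2.1, who passed the original bar on merit, must now pay 0.5 * (4.0 - 2.1) = 0.95 in manipulation cost just to stay accepted.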