The measure and mismeasure of fairness: a critical review of fair machine learning

#artificialintelligence

We've visited the topic of fairness in the context of machine learning several times on The Morning Paper, but I'm still picking up new insights every time I revisit the topic, and today's paper choice is no exception. Between 1910 and 1913, Russell & Whitehead published Principia Mathematica, with the goal of providing a solid foundation for all of mathematics. In 1931, Gödel's incompleteness theorem shattered that dream, showing that any consistent axiomatic system rich enough to express arithmetic contains true statements that cannot be proven within the system. In case you're wondering where on earth I'm going with this… it's a very stretched analogy I've been playing with in my mind.


An Intersectional Definition of Fairness

arXiv.org Machine Learning

We introduce a measure of fairness for algorithms and data with regard to multiple protected attributes. Our proposed definition, differential fairness, is informed by the framework of intersectionality, which analyzes how interlocking systems of power and oppression affect individuals along overlapping dimensions including race, gender, sexual orientation, class, and disability. We show that our criterion behaves sensibly for any subset of the set of protected attributes, and we illustrate links to differential privacy. A case study on census data demonstrates the utility of our approach.
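Differential fairness, like differential privacy, bounds how far outcome probabilities may drift apart: for every pair of intersectional groups, the probability of a given outcome must agree to within a factor of e^ε. Below is a minimal sketch of one way to estimate such an ε empirically; the smoothing constant, column names, and plug-in estimator are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np
import pandas as pd

def empirical_differential_fairness(df, protected_cols, outcome_col, smoothing=1.0):
    """Estimate an epsilon such that, for every pair of intersectional groups,
    the smoothed rates of a binary outcome differ by at most a factor of exp(epsilon).

    This is a simple plug-in estimate with additive smoothing, shown only to
    illustrate the idea; it is not the paper's exact estimator.
    """
    probs = []
    for _, group in df.groupby(protected_cols):
        positives = group[outcome_col].sum()   # assumes a 0/1 outcome column
        total = len(group)
        # Additive (Laplace-style) smoothing keeps rates away from 0 and 1,
        # so the log-ratios below stay finite for sparse groups.
        p = (positives + smoothing) / (total + 2.0 * smoothing)
        probs.append(p)
    probs = np.array(probs)

    # Epsilon is the largest absolute log-ratio of outcome probabilities,
    # checked for both the positive and the negative outcome.
    eps_pos = np.log(probs.max()) - np.log(probs.min())
    eps_neg = np.log((1 - probs).max()) - np.log((1 - probs).min())
    return max(eps_pos, eps_neg)

# Hypothetical usage with made-up column names:
# df = pd.DataFrame({"race": [...], "gender": [...], "hired": [...]})
# eps = empirical_differential_fairness(df, ["race", "gender"], "hired")
```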


Bayesian Modeling of Intersectional Fairness: The Variance of Bias

arXiv.org Artificial Intelligence

Intersectionality is a framework that analyzes how interlocking systems of power and oppression affect individuals along overlapping dimensions including race, gender, sexual orientation, class, and disability. Intersectionality theory therefore implies it is important that fairness in artificial intelligence systems be protected with regard to multi-dimensional protected attributes. However, the measurement of fairness becomes statistically challenging in the multi-dimensional setting due to data sparsity, which increases rapidly with the number of dimensions and the number of values per dimension. We present a Bayesian probabilistic modeling approach for the reliable, data-efficient estimation of fairness with multi-dimensional protected attributes, which we apply to novel intersectional fairness metrics. Experimental results on census data and the COMPAS criminal justice recidivism dataset demonstrate the utility of our methodology, and show that Bayesian methods are valuable for the modeling and measurement of fairness in an intersectional context.
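The statistical difficulty the abstract points to is easy to see: with several protected dimensions, many intersectional groups contain only a handful of records, so raw per-group outcome rates are extremely noisy. The sketch below illustrates the basic remedy with a simple Beta-Binomial model that shrinks small-group estimates toward the overall rate; it is a deliberately minimal stand-in for the paper's richer Bayesian model, and the prior strength and column names are assumptions.

```python
import numpy as np
import pandas as pd

def beta_binomial_group_rates(df, protected_cols, outcome_col, prior_strength=2.0):
    """Posterior mean outcome rate per intersectional group under a shared
    Beta(a0, b0) prior centred on the overall rate.

    Small groups are shrunk towards the global rate; large groups are
    dominated by their own counts. This is only an illustration of why
    Bayesian smoothing helps with intersectional data sparsity.
    """
    overall = df[outcome_col].mean()
    a0 = prior_strength * overall          # prior pseudo-successes
    b0 = prior_strength * (1.0 - overall)  # prior pseudo-failures

    rows = []
    for key, group in df.groupby(protected_cols):
        k = group[outcome_col].sum()       # assumes a 0/1 outcome column
        n = len(group)
        posterior_mean = (a0 + k) / (a0 + b0 + n)
        rows.append((key, n, posterior_mean))
    return pd.DataFrame(rows, columns=["group", "count", "posterior_rate"])
```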


Fairness for Whom? Critically reframing fairness with Nash Welfare Product

arXiv.org Artificial Intelligence

Recent studies on disparate impact in machine learning applications have sparked a debate around the concept of fairness, along with attempts to formalize its different criteria. Many of these approaches focus on reducing prediction errors while maximizing the utility of the institution alone. This work seeks to reconceptualize and critically frame the existing discourse on fairness by underlining the implicit biases embedded in common understandings of fairness in the literature and how they contrast with the corresponding economic and legal definitions. The paper expands the concepts of utility and fairness by bringing in ideas from established literature in welfare economics and game theory. We then translate these concepts to the algorithmic prediction domain by defining a formalization of the Nash Welfare Product that expands utility by collapsing that of the institution using the prediction tool and that of the individual subject to the prediction into one function. We then apply a modulating function that makes the fairness and welfare trade-offs explicit based on designated policy goals, and embed the result in a temporal model to take into account the effects of decisions beyond the scope of one-shot predictions. We apply this to a binary classification problem and present results of a multi-epoch simulation based on the UCI Adult Income dataset and a test-case analysis of the ProPublica recidivism dataset, which show that expanding the concept of utility results in a fairer distribution, correcting for the biases embedded in the data without sacrificing classifier accuracy.
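The key move is to score a decision rule by a product of utilities rather than by the institution's utility alone, so that a policy which leaves individuals with near-zero benefit scores poorly no matter how accurate it is. The toy sketch below shows that shape of objective for binary predictions; the specific utility functions, the modulating exponent alpha, and the use of a geometric mean are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def nash_welfare_product(y_true, y_pred, alpha=1.0, eps=1e-6):
    """Toy Nash-welfare-style score for a binary classifier.

    institution_utility: fraction of correct predictions (accuracy).
    individual_utility:  per-person benefit, here 1 for a favourable
                         (positive) prediction and eps otherwise.
    The score multiplies the institution's utility by the geometric mean of
    individual utilities, with alpha modulating how strongly individual
    welfare weighs in. These choices are illustrative, not the paper's.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)

    institution_utility = (y_true == y_pred).mean()
    individual_utility = np.where(y_pred == 1, 1.0, eps)

    # The geometric mean keeps the product comparable across sample sizes.
    geo_mean_individual = np.exp(np.mean(np.log(individual_utility)))
    return institution_utility * geo_mean_individual ** alpha

# Example: a classifier that denies everyone scores near zero even when it
# is 50% accurate, because individual welfare collapses the product.
# nash_welfare_product([0, 0, 1, 1], [0, 0, 0, 0])
```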