
The zoo of Fairness metrics in Machine Learning

In recent years, the problem of addressing fairness in Machine Learning (ML) and automatic decision-making has attracted a lot of attention in the scientific communities dealing with Artificial Intelligence. A plethora of different definitions of fairness in ML have been proposed, each capturing a different notion of what constitutes a "fair decision" in situations impacting individuals in the population. The precise differences, implications and "orthogonality" between these notions have not yet been fully analyzed in the literature. In this work, we try to bring some order to this zoo of definitions.
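As a concrete illustration of how these notions can diverge, the sketch below (my own toy code, not from the paper; the helper name `group_rates` is hypothetical) computes three per-group quantities that underlie common definitions: the selection rate (demographic parity), the true positive rate (equal opportunity / separation), and the positive predictive value (predictive parity / sufficiency). A classifier can equalize one of these across groups while leaving the others unequal.

```python
import numpy as np

def group_rates(y_true, y_pred, group):
    """Per-group rates used by several common fairness definitions."""
    out = {}
    for g in np.unique(group):
        m = group == g
        out[g] = {
            # P(Yhat=1 | A=g): compared across groups by demographic parity
            "selection_rate": (y_pred[m] == 1).mean(),
            # P(Yhat=1 | Y=1, A=g): compared by equal opportunity / separation
            "tpr": y_pred[m][y_true[m] == 1].mean(),
            # P(Y=1 | Yhat=1, A=g): compared by predictive parity / sufficiency
            "ppv": y_true[m][y_pred[m] == 1].mean(),
        }
    return out

# toy data: two groups, random labels and predictions
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 1000)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
rates = group_rates(y_true, y_pred, group)
```

Comparing `rates[0]` and `rates[1]` entry by entry makes the "orthogonality" concrete: equality of any single entry across groups does not force equality of the others.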

Fair Data Adaptation with Quantile Preservation

Fairness of classification and regression has received much attention recently, and various, partially non-compatible, criteria have been proposed. The fairness criteria can be enforced for a given classifier or, alternatively, the data can be adapted to ensure that every classifier trained on the data will adhere to the desired fairness criteria. We present a practical data adaptation method based on quantile preservation in causal structural equation models. The data adaptation is based on a presumed counterfactual model for the data. While the counterfactual model itself cannot be verified experimentally, we show that certain population notions of fairness are still guaranteed even if the counterfactual model is misspecified. The precise nature of the fulfilled non-causal fairness notion (such as demographic parity, separation or sufficiency) depends on the structure of the underlying causal model and the choice of resolving variables. We describe an implementation of the proposed data adaptation procedure based on Random Forests and demonstrate its practical use on simulated and real-world data.
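The paper's procedure relies on causal structural equation models and Random Forests, which is beyond a short snippet; the sketch below only illustrates the generic quantile-matching idea behind quantile preservation, under the assumption of a single feature and two groups (the function name `quantile_map` is my own). Each value keeps its within-group quantile but is mapped onto a reference group's distribution, so the adapted feature carries no group information.

```python
import numpy as np

def quantile_map(x, group, reference=0):
    """Map each non-reference group's feature values onto the reference
    group's empirical distribution, preserving within-group quantiles."""
    x_adapted = x.astype(float).copy()
    ref = np.sort(x[group == reference])
    for g in np.unique(group):
        if g == reference:
            continue
        m = group == g
        # within-group quantile (rank / n) of each value
        q = np.searchsorted(np.sort(x[m]), x[m], side="right") / m.sum()
        # value at the same quantile of the reference distribution
        x_adapted[m] = np.quantile(ref, np.clip(q, 0.0, 1.0))
    return x_adapted

# toy example: group 1's feature is shifted relative to group 0
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(2, 1, 500)])
group = np.array([0] * 500 + [1] * 500)
x_adapted = quantile_map(x, group)
```

After adaptation, group 1's values follow (approximately) group 0's distribution, so any classifier trained on the adapted feature alone cannot distinguish the groups through it; the paper's contribution is doing this within a causal model so that chosen causal pathways are preserved.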

A Tutorial on Fairness in Machine Learning


This post is the first in the series. The content is based on: the tutorial on fairness given by Solon Barocas and Moritz Hardt at NIPS 2017, days 1 and 4 of CS 294: Fairness in Machine Learning taught by Moritz Hardt at UC Berkeley, and my own understanding of the fairness literature. I highly encourage interested readers to check out the linked NIPS tutorial and the course website. Fairness has become one of the most popular topics in machine learning in recent years. Publications in this field have exploded (see Fig. 1), and the research community has invested a large amount of effort in it.

Fairness with Dynamics

It has recently been shown that if feedback effects of decisions are ignored, then imposing fairness constraints such as demographic parity or equality of opportunity can actually exacerbate unfairness. We propose to address this challenge by modeling feedback effects as the dynamics of a Markov decision process (MDP). First, we define analogs of fairness properties that have been proposed for supervised learning. Second, we propose algorithms for learning fair decision-making policies for MDPs. We also explore extensions to reinforcement learning, where parts of the dynamical system are unknown and must be learned without violating fairness. Finally, we demonstrate the need to account for dynamical effects using simulations on a loan applicant MDP.
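The feedback effect the abstract refers to can be seen in a very small simulation. The sketch below is my own toy loan model, not the paper's MDP: repayment probability grows with a credit score, successful loans raise the score and defaults lower it. Under a single fixed approval threshold, a group that starts with lower scores defaults more often among its approved applicants, and the gap between groups can widen over time.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(scores, threshold, steps=20):
    """Evolve one group's credit scores under a fixed approval threshold.
    Approved applicants repay with probability increasing in score;
    repayment raises the score (+5), default damages it (-10)."""
    scores = scores.astype(float).copy()
    for _ in range(steps):
        approved = scores >= threshold
        p_repay = 1.0 / (1.0 + np.exp(-(scores - 50.0) / 10.0))
        repaid = rng.random(scores.size) < p_repay
        scores[approved & repaid] += 5.0
        scores[approved & ~repaid] -= 10.0
        scores = np.clip(scores, 0.0, 100.0)
    return scores

# two groups with different initial score distributions
group_a = rng.normal(60, 10, 500)
group_b = rng.normal(45, 10, 500)
after_a = simulate(group_a, threshold=50)
after_b = simulate(group_b, threshold=50)
```

In this toy dynamic, the same static decision rule pushes the two groups' score distributions apart, which is exactly why fairness properties defined on a single decision round are not enough once the MDP dynamics are taken into account.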

Approaching fairness in machine learning


As machine learning increasingly affects domains protected by anti-discrimination law, there is much interest in the problem of algorithmically measuring and ensuring fairness in machine learning. Across academia and industry, experts are finally embracing this important research direction, which has long been marred by sensationalist clickbait overshadowing scientific efforts. This sequence of posts is a sober take on the subtleties and difficulties of engaging productively with the issue of fairness in machine learning. Prudence is necessary, since a poor regulatory proposal could easily do more harm than doing nothing at all. In this first post, I will focus on a sticky idea I call demographic parity, which through its many variants has been proposed as a fairness criterion in dozens of papers.
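One reason demographic parity is "sticky" is that it is easy to satisfy in a way that helps no one. The toy construction below (my own, in the spirit of the post's critique) accepts exactly the qualified members of one group but random members of the other: the acceptance rates match, so demographic parity holds, yet the classifier is accurate for one group and no better than a coin flip for the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)
qualified = rng.random(n) < 0.5            # ground truth, same base rate in both groups

# accept qualified members of group 0, but random members of group 1
pred = np.where(group == 0,
                qualified.astype(int),
                (rng.random(n) < 0.5).astype(int))

rate0 = pred[group == 0].mean()            # acceptance rate, group 0 (~0.5)
rate1 = pred[group == 1].mean()            # acceptance rate, group 1 (~0.5)
acc0 = (pred == qualified)[group == 0].mean()   # perfect for group 0
acc1 = (pred == qualified)[group == 1].mean()   # coin flip for group 1
```

The acceptance rates `rate0` and `rate1` are (approximately) equal, so the parity criterion is satisfied, even though the decision rule treats qualified individuals in the two groups completely differently.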