Pushing the limits of fairness impossibility: Who's the fairest of them all?
The impossibility theorem of fairness is a foundational result in the algorithmic fairness literature. It states that, outside of special cases, one cannot exactly and simultaneously satisfy all three common and intuitive definitions of fairness - demographic parity, equalized odds, and predictive rate parity. This result has driven most works to focus on solutions for one or two of the metrics. Rather than follow suit, in this paper we present a framework that pushes the limits of the impossibility theorem in order to satisfy all three metrics to the best extent possible. We develop an integer-programming-based approach that can yield a certifiably optimal post-processing method for simultaneously satisfying multiple fairness criteria under small violations. We present experiments demonstrating that our post-processor can improve fairness across the different definitions simultaneously with minimal reduction in model performance. We also discuss applications of our framework to model selection and fairness explainability, thereby attempting to answer the question: Who's the fairest of them all?
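To make the idea of "satisfying multiple fairness criteria under small violations" concrete, the sketch below shows one generic way such a post-processor could be set up as an integer program over a labeled holdout set. This is not the paper's formulation: the function names (fair_postprocess, _bounded_gap), the tolerance eps, the choice of the PuLP/CBC solver, and the restriction to two groups and binary labels are all illustrative assumptions. It bounds only the demographic-parity and equalized-odds gaps; predictive rate parity is a ratio of decision variables and would need an extra linearization step, omitted here.

```python
# Hypothetical sketch: integer-programming post-processing of binary decisions
# so that demographic-parity and equalized-odds gaps stay within a tolerance eps,
# while the decisions stay aligned with the model's scores. Not the paper's method.
import pulp


def _bounded_gap(prob, z, idx_a, idx_b, eps):
    # Enforce |mean(z over idx_a) - mean(z over idx_b)| <= eps, written without
    # division: n_b * sum_a - n_a * sum_b must lie in [-eps*n_a*n_b, eps*n_a*n_b].
    n_a, n_b = len(idx_a), len(idx_b)
    sum_a = pulp.lpSum(z[i] for i in idx_a)
    sum_b = pulp.lpSum(z[i] for i in idx_b)
    prob += n_b * sum_a - n_a * sum_b <= eps * n_a * n_b
    prob += n_a * sum_b - n_b * sum_a <= eps * n_a * n_b


def fair_postprocess(scores, labels, groups, eps=0.05):
    n = len(scores)
    prob = pulp.LpProblem("fair_postprocessing", pulp.LpMaximize)
    # z[i] is the post-processed binary decision for instance i.
    z = [pulp.LpVariable(f"z_{i}", cat="Binary") for i in range(n)]

    # Objective: keep decisions as consistent with the model scores as possible.
    prob += pulp.lpSum(scores[i] * z[i] + (1 - scores[i]) * (1 - z[i])
                       for i in range(n))

    g0, g1 = sorted(set(groups))
    a = [i for i in range(n) if groups[i] == g0]
    b = [i for i in range(n) if groups[i] == g1]

    # Demographic parity: positive rates of the two groups differ by at most eps.
    _bounded_gap(prob, z, a, b, eps)
    # Equalized odds: TPR and FPR gaps at most eps (condition on the true label).
    for y in (0, 1):
        _bounded_gap(prob, z,
                     [i for i in a if labels[i] == y],
                     [i for i in b if labels[i] == y], eps)

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [int(round(v.value())) for v in z]
```

Because the relaxed constraints are linear and the variables are binary, an off-the-shelf MIP solver returns a provably optimal assignment for the given tolerance, which is what makes a "certifiably optimal" post-processor of this general kind possible.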
Reviews: Near Neighbor: Who is the Fairest of Them All?
The writing up to page 4 is very wordy and I believe could be written more succinctly. First of all, I am not entirely sure why the problem of sampling from a sub-collection of sets needs to be stated twice in the paper. In addition, the paper should clearly state that the new sampling strategy can be embedded in the existing LSH method to achieve unbiased query results. Nevertheless, the algorithm does seem interesting - the key bottleneck of estimating the degree of a particular point (essentially the number of distinct buckets that contain it) is identified, and the paper offers interesting solutions based on existing work. Section 3, line 161 refers to the union of G, but G is a set of sets.
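For intuition about why the degree of a point matters, the sketch below shows the standard rejection-sampling trick for drawing a uniform (hence unbiased) sample from the union of a collection of buckets. This is a generic illustration, not the paper's algorithm: the names sample_from_union and degree are made up, and deg(x) is computed exactly here, which is precisely the expensive step that a degree estimate would replace.

```python
# Generic sketch: uniform sampling from the union of (possibly overlapping) sets,
# e.g. the LSH buckets probed for a query. Points appearing in many buckets are
# proposed more often, so each proposal is accepted with probability 1/deg(x).
import random


def degree(x, buckets):
    # Number of distinct buckets that contain x (the "degree" of the point).
    return sum(x in b for b in buckets)


def sample_from_union(buckets, rng=random):
    sizes = [len(b) for b in buckets]
    while True:
        # Propose: pick a bucket proportional to its size, then a uniform member.
        j = rng.choices(range(len(buckets)), weights=sizes)[0]
        x = rng.choice(list(buckets[j]))
        # Accept with probability 1/deg(x): accepted points are uniform over the union.
        if rng.random() < 1.0 / degree(x, buckets):
            return x


# Example: element 3 lies in all three buckets, yet after rejection each of
# {1, 2, 3, 4, 5} is returned with equal probability.
buckets = [{1, 2, 3}, {2, 3, 4}, {3, 5}]
print(sample_from_union(buckets))
```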
AI, AI on the wall -- Who's the Fairest of them all?
"A world perfectly fair in some dimensions would be horribly unfair in others." "Fairness" in Artificial Intelligence (AI) applications -- both as a concept and a practice -- is the focus of many organisations as they deploy new technologies for greater effectiveness and efficiencies. That machines are faster at processing large amounts of information and the notion that they are'more objective' than humans, appear to make them an obvious choice for progressivity and seemingly impartial actors in'fairer' decision-making. Yet, algorithmic based decisions have not come without their share of controversies -- Australia's recent'robo-debt' government intervention which wrongly pursued thousands of welfare recipients; the UK's'A-Levels fiasco' of downgrading graduating grades based on historical data, its controversial visa application streaming tool; and concerns about Clearview AI's facial recognition software for policing are raising new questions on the role of these technologies in society. Risk assessments are part of the fabric of modern society, but what we are dealing with here is not just'scaling up' human capacity for decision-making without the unwanted human biases and errors -- we are also extolling the'virtues of objectivity' under the guise of'fairness' (which is inherently subjective!) and failing to recognise the many inter-relationships that are being unraveled through the use of these algorithms in our daily lives.