Uncertainty in Fairness Assessment: Maintaining Stable Conclusions Despite Fluctuations
Barrainkua, Ainhize, Gordaliza, Paula, Lozano, Jose A., Quadrianto, Novi
arXiv.org Artificial Intelligence
With the widespread adoption of machine learning (ML) systems in social, economic, and industrial domains, concerns about the fairness of automated decisions have joined the long-standing problem of ensuring that algorithms perform efficiently in a stable and interpretable manner. Although both aspects are measured in terms of performance metrics, fairness entails the additional challenge of incorporating sensitive information from the data, and new procedures are needed to control the stability of such outcomes. Recent ML trends increasingly encourage researchers to incorporate uncertainty into the evaluation of algorithm-based systems. To increase the transparency of algorithmic performance measures, typically for comparison purposes, some authors [3, 19] propose treating these metrics as random variables whose posterior distributions are updated through Bayesian inference. In the fair learning setting, such considerations are equally necessary, especially since fairness metrics have been shown to be unstable with respect to dataset composition. In particular, Ji et al. [17] and Friedler et al. [12] showed how certain fairness metrics strongly vary, respectively, in hold-out
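The idea of treating a metric as a random variable with a Bayesian posterior can be illustrated with a minimal sketch. Assuming (hypothetically; the counts and the choice of a Beta-Binomial model are illustrative, not taken from the paper or from [3, 19]) that we observe positive-prediction counts for two sensitive groups, a conjugate Beta(1, 1) prior yields a posterior over each group's rate, and sampling from both posteriors turns the demographic-parity gap itself into a distribution rather than a point estimate:

```python
import random
import statistics

random.seed(0)

# Hypothetical evaluation counts: positive predictions per sensitive group
# (illustrative numbers, not from the paper).
pos_a, n_a = 45, 100  # group A: 45 positive decisions out of 100
pos_b, n_b = 30, 100  # group B: 30 positive decisions out of 100

# Beta(1, 1) prior updated with the observed counts gives a
# Beta(1 + successes, 1 + failures) posterior over each group's rate.
N = 50_000
gap = sorted(
    random.betavariate(1 + pos_a, 1 + n_a - pos_a)
    - random.betavariate(1 + pos_b, 1 + n_b - pos_b)
    for _ in range(N)
)

# The demographic-parity gap as a random variable: report a posterior
# mean and a 95% credible interval instead of a single number.
mean_gap = statistics.fmean(gap)
lo, hi = gap[int(0.025 * N)], gap[int(0.975 * N)]
print(f"posterior mean gap: {mean_gap:.3f}")
print(f"95% credible interval: [{lo:.3f}, {hi:.3f}]")
```

A wide credible interval here would signal exactly the instability the abstract describes: on a small or shifting evaluation set, the point estimate of a fairness metric can be far less trustworthy than it appears.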
Feb-2-2023