BaBE: Enhancing Fairness via Estimation of Latent Explaining Variables
Ruta Binkyte, Daniele Gorla, Catuscia Palamidessi
arXiv.org Artificial Intelligence
We consider the problem of unfair discrimination between two groups and propose a pre-processing method for achieving fairness. Corrective methods like statistical parity typically degrade accuracy and fail to achieve genuine fairness when the sensitive attribute S is correlated with the legitimate attribute E (the explanatory variable that should determine the decision). To overcome these drawbacks, other fairness notions have been proposed, notably conditional statistical parity and equal opportunity. However, E is often not directly observable in the data; it is a latent variable. We may instead observe a proxy variable Z representing E, but Z may itself be affected by S and hence be biased. To address this, we propose BaBE (Bayesian Bias Elimination), an approach combining Bayesian inference with the Expectation-Maximization method to estimate, for each group, the most likely value of E given an observed Z. The decision can then be based directly on the estimated E. Experiments on synthetic and real datasets show that our approach achieves a good level of fairness together with high accuracy.
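The abstract does not spell out the algorithm, so the following is only a rough illustration of the Bayes-plus-EM idea under strong simplifying assumptions: E and Z are discrete, the emission distribution P(Z | E) is assumed known, and EM is run separately for each group S = s to recover the group prior P(E | S = s); E is then estimated as the mode of the Bayes posterior P(E | Z, S). The function names em_prior and estimate_e and all parameters are illustrative, not taken from the paper.

```python
import numpy as np

def em_prior(counts_z, emission, n_iter=200, tol=1e-8):
    """Estimate the prior P(E) for one group via EM.

    counts_z: histogram of observed Z values for this group, shape (n_z,).
    emission: assumed-known matrix emission[e, z] = P(Z=z | E=e), shape (n_e, n_z).
    """
    n_e = emission.shape[0]
    prior = np.full(n_e, 1.0 / n_e)              # uniform starting point
    for _ in range(n_iter):
        # E-step: posterior P(E=e | Z=z) under the current prior.
        joint = prior[:, None] * emission        # shape (n_e, n_z)
        post = joint / joint.sum(axis=0, keepdims=True)
        # M-step: re-estimate the prior from expected counts of E.
        new_prior = post @ counts_z
        new_prior /= new_prior.sum()
        if np.max(np.abs(new_prior - prior)) < tol:
            return new_prior
        prior = new_prior
    return prior

def estimate_e(z_values, counts_z, emission):
    """Return the most likely E for each observed Z in one group."""
    prior = em_prior(counts_z, emission)
    post = prior[:, None] * emission
    post /= post.sum(axis=0, keepdims=True)      # P(E=e | Z=z) for this group
    return post[:, z_values].argmax(axis=0)      # posterior mode per observation

# Hypothetical usage: binary E, three-valued Z, one group S = s.
emission = np.array([[0.7, 0.2, 0.1],
                     [0.1, 0.3, 0.6]])           # assumed-known P(Z | E)
counts_z = np.array([50, 30, 20])                # observed Z histogram for the group
z_obs = np.array([0, 2, 1])
print(estimate_e(z_obs, counts_z, emission))
```

Running this independently per group lets the estimated prior P(E | S = s) absorb the group-specific bias in Z, so that downstream decisions can condition on the de-biased estimate of E rather than on Z.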
July 6, 2023