Wasserstein projection distance for fairness testing of regression models
Wanxin Li, Yongjin P. Park, Khanh Dao Duc
arXiv.org Artificial Intelligence
Fairness in machine learning is a critical concern, yet most research has focused on classification tasks, leaving regression models underexplored. This paper introduces a Wasserstein projection-based framework for fairness testing in regression models, focusing on expectation-based criteria. We propose a hypothesis-testing approach and an optimal data perturbation method to improve fairness while balancing accuracy. Theoretical results include a detailed categorization of fairness criteria for regression, a dual reformulation of the Wasserstein projection test statistic, and the derivation of asymptotic bounds and limiting distributions. Experiments on synthetic and real-world datasets demonstrate that the proposed method offers higher specificity compared to permutation-based tests, and effectively detects and mitigates biases in real applications such as student performance and housing price prediction.
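To make the core idea concrete, here is a minimal sketch (an illustration only, not the paper's exact procedure) of measuring unfairness in a regression model as the 1-D Wasserstein distance between its predictions for two demographic groups, with a permutation test of the kind the paper benchmarks against. The group sizes, distributions, and permutation count are illustrative assumptions.

```python
# Sketch: 1-D Wasserstein distance between a regressor's predictions for
# two protected groups, used as a fairness statistic, with a permutation
# p-value. Data below is synthetic; group B's predictions are shifted up.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
pred_a = rng.normal(loc=10.0, scale=2.0, size=300)  # group A predictions
pred_b = rng.normal(loc=11.0, scale=2.0, size=300)  # group B, biased upward

observed = wasserstein_distance(pred_a, pred_b)

# Permutation null: shuffle group membership and recompute the statistic.
pooled = np.concatenate([pred_a, pred_b])
n_a = len(pred_a)
null_stats = []
for _ in range(1000):
    rng.shuffle(pooled)
    null_stats.append(wasserstein_distance(pooled[:n_a], pooled[n_a:]))

p_value = np.mean(np.array(null_stats) >= observed)
print(f"W1 = {observed:.3f}, permutation p = {p_value:.3f}")
```

With the synthetic shift above, the observed distance far exceeds the permutation null, so the test rejects demographic parity; the paper's contribution is replacing this resampling step with a Wasserstein-projection test statistic that has derived asymptotic bounds and, per the abstract, higher specificity.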
Oct-7-2025