Fairness Improvement with Multiple Protected Attributes: How Far Are We?
Zhenpeng Chen, Jie M. Zhang, Federica Sarro, Mark Harman
–arXiv.org Artificial Intelligence
Existing research mostly improves the fairness of Machine Learning (ML) software with respect to a single protected attribute at a time, but this is unrealistic given that many users have multiple protected attributes. This paper conducts an extensive study of fairness improvement regarding multiple protected attributes, covering 11 state-of-the-art fairness improvement methods. We analyze the effectiveness of these methods across different datasets, metrics, and ML models when considering multiple protected attributes. The results reveal that improving fairness for a single protected attribute can substantially decrease fairness regarding unconsidered protected attributes. This decrease is observed in up to 88.3% of scenarios (57.5% on average). More surprisingly, we find little difference in accuracy loss when considering single versus multiple protected attributes, indicating that accuracy can be maintained in the multiple-attribute paradigm. However, the effect on precision and recall when handling multiple protected attributes is about 5 and 8 times, respectively, that of handling a single attribute. This has important implications for future fairness research: reporting only accuracy as the ML performance metric, which is currently common in the literature, is inadequate.
Nov-3-2023
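The abstract's core finding — that debiasing for one protected attribute can leave or worsen disparity on another — can be illustrated with a per-attribute fairness metric. The sketch below uses statistical parity difference (one common group-fairness metric; the paper itself evaluates several metrics and methods not reproduced here) on invented toy data, where predictions happen to be balanced across one attribute but not the other:

```python
# Illustrative sketch only: the data and attribute names below are invented,
# not taken from the paper's benchmarks.

def spd(preds, group):
    """Statistical parity difference: |P(pred=1 | g=0) - P(pred=1 | g=1)|
    for a binary protected attribute `group` (values 0/1)."""
    rate = lambda g: sum(p for p, a in zip(preds, group) if a == g) / group.count(g)
    return abs(rate(0) - rate(1))

# Toy binary predictions and two binary protected attributes (e.g., sex, race).
preds = [1, 0, 1, 1, 0, 1, 0, 0]
sex   = [0, 0, 0, 0, 1, 1, 1, 1]
race  = [0, 1, 0, 1, 0, 1, 0, 1]

# A model can look fair on one attribute while being unfair on another,
# which is why the multi-attribute paradigm requires reporting both.
for name, attr in [("sex", sex), ("race", race)]:
    print(name, round(spd(preds, attr), 3))  # sex 0.5, race 0.0
```

Here the positive-prediction rate differs by 0.5 between the two sex groups but not at all between the two race groups, so a method that only targets `race` would report success while the `sex` disparity persists.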