Achieving Fairness Without Harm via Selective Demographic Experts
Tan, Xuwei, Wang, Yuanlong, Pham, Thai-Hoang, Zhang, Ping, Zhang, Xueru
arXiv.org Artificial Intelligence
As machine learning systems become increasingly integrated into human-centered domains such as healthcare, ensuring fairness while maintaining high predictive performance is critical. Existing bias mitigation techniques often impose a trade-off between fairness and accuracy, inadvertently degrading performance for certain demographic groups. In high-stakes domains like clinical diagnosis, such trade-offs are ethically and practically unacceptable. In this study, we propose a fairness-without-harm approach that learns distinct representations for different demographic groups and selectively applies demographic experts -- each consisting of a group-specific representation and a personalized classifier -- through a no-harm-constrained selection. We evaluate our approach on three real-world medical datasets -- covering eye disease, skin cancer, and X-ray diagnosis -- as well as two face datasets. Extensive empirical results demonstrate the effectiveness of our approach in achieving fairness without harm.
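The abstract describes a per-group expert architecture gated by a no-harm constraint: a group's expert is used only when it does not underperform a shared, group-blind model on that group. A minimal toy sketch of this selection rule (not the authors' implementation; the threshold "classifiers," synthetic data, and accuracy-based no-harm check are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two demographic groups whose labels depend on different features.
n = 400
group = rng.integers(0, 2, n)            # hypothetical group id per sample
x = rng.normal(size=(n, 2))
y = np.where(group == 0, x[:, 0] > 0, x[:, 1] > 0).astype(int)

def threshold_clf(col):
    """Trivial stand-in 'classifier': predict 1 when feature `col` is positive."""
    return lambda X: (X[:, col] > 0).astype(int)

def acc(f, X, t):
    return (f(X) == t).mean()

# Shared (group-blind) model: the single best feature overall.
shared = max((threshold_clf(j) for j in range(2)), key=lambda f: acc(f, x, y))

# One expert per group, fit on that group's data only.
experts = {g: max((threshold_clf(j) for j in range(2)),
                  key=lambda f: acc(f, x[group == g], y[group == g]))
           for g in (0, 1)}

def predict(X, groups):
    """No-harm selection: fall back to the shared model unless the group's
    expert is at least as accurate on that group (held-out data in practice)."""
    out = shared(X).copy()
    for g, expert in experts.items():
        m = groups == g
        if acc(expert, X[m], y[m]) >= acc(shared, X[m], y[m]):
            out[m] = expert(X[m])
    return out

pred = predict(x, group)
for g in (0, 1):
    m = group == g
    print(f"group {g}: shared acc={acc(shared, x[m], y[m]):.2f}, "
          f"selective acc={(pred[m] == y[m]).mean():.2f}")
```

By construction, the selection rule can only match or improve each group's accuracy relative to the shared model, which is the "without harm" guarantee the paper targets (the actual method constrains selection during learning rather than post hoc).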
Nov-11-2025