To Pool or Not To Pool: Analyzing the Regularizing Effects of Group-Fair Training on Shared Models
Cyrus Cousins, I. Elizabeth Kumar, Suresh Venkatasubramanian
In fair machine learning, one source of performance disparities between groups is overfitting to groups with relatively few training samples. We derive group-specific bounds on the generalization error of welfare-centric fair machine learning that benefit from the larger sample size of the majority group. We do this by considering group-specific Rademacher averages over a restricted hypothesis class, which contains the family of models likely to perform well with respect to a fair learning objective (e.g., a power mean). Our simulations demonstrate that these bounds improve upon a naive method, as predicted by theory, with particularly significant improvements for smaller groups.
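As a concrete illustration of the power-mean objective the abstract refers to, here is a minimal sketch (not the authors' code; the function name and example loss values are hypothetical) that aggregates per-group empirical losses with a p-power mean, interpolating between the average loss (p = 1) and the worst-group loss (p → ∞):

```python
import numpy as np

def power_mean(values, p, weights=None):
    """Weighted p-power mean: M_p(v) = (sum_k w_k * v_k**p)**(1/p).

    Applied to per-group losses, p = 1 recovers the (weighted) average
    loss, while p -> infinity recovers the worst-group loss.
    """
    values = np.asarray(values, dtype=float)
    if weights is None:
        weights = np.full(len(values), 1.0 / len(values))
    else:
        weights = np.asarray(weights, dtype=float)
    if np.isinf(p):
        return values.max()  # limiting case: worst-off group
    return (weights @ values**p) ** (1.0 / p)

# Hypothetical per-group empirical losses: the smaller group tends to
# have higher loss when a shared model overfits the majority group.
group_losses = [0.10, 0.35]

for p in (1, 2, 5, np.inf):
    print(f"p = {p}: objective = {power_mean(group_losses, p):.4f}")
```

Larger values of p place more weight on the worst-off group, which is why per-group generalization bounds, rather than a single pooled bound, are the relevant quantity for such objectives.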
arXiv.org Artificial Intelligence
Feb-28-2024