Fairness Without Harm: An Influence-Guided Active Sampling Approach
Neural Information Processing Systems
The pursuit of fairness in machine learning (ML), ensuring that models do not exhibit biases toward protected demographic groups, typically results in a compromise scenario. This compromise can be explained by a Pareto frontier: given certain resources (e.g., data), reducing fairness violations often comes at the cost of lowering model accuracy. In this work, we aim to train models that mitigate group fairness disparity without harming model accuracy. Intuitively, acquiring more data is a natural and promising way to achieve this goal by reaching a better Pareto frontier of the fairness-accuracy tradeoff. Current data acquisition methods, such as fair active learning approaches, typically require annotating sensitive attributes.
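As a minimal sketch of the group fairness disparity the abstract refers to, the snippet below computes the demographic parity gap, a standard group fairness metric: the absolute difference in positive-prediction rates between two protected groups. The function name and example data are illustrative, not from the paper.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates across two groups.

    y_pred: binary predictions (0/1); group: binary group membership (0/1).
    A gap of 0 means both groups receive positive predictions at equal rates.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Illustrative data: group 0 gets positives 75% of the time, group 1 only 25%.
gap = demographic_parity_gap([1, 1, 1, 0, 0, 0, 1, 0],
                             [0, 0, 0, 0, 1, 1, 1, 1])
# gap is 0.5, a large disparity
```

On the Pareto frontier view, interventions that push this gap toward zero on a fixed dataset typically reduce accuracy; the paper's premise is that acquiring more data can shift the frontier itself.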
May-29-2025, 23:47:22 GMT
- Country:
- North America > United States (0.45)
- Genre:
- Research Report > Experimental Study (0.93)
- Industry:
- Banking & Finance (0.67)
- Information Technology > Security & Privacy (0.68)