Fairness for the People, by the People: Minority Collective Action
Omri Ben-Dov, Samira Samadi, Amartya Sanyal, Alexandru Ţifrea
arXiv.org Artificial Intelligence
Machine learning models often preserve biases present in their training data, leading to unfair treatment of certain minority groups. Although many firm-side bias mitigation techniques exist, they typically incur utility costs and require organizational buy-in. Recognizing that many models rely on user-contributed data, end-users can instead induce fairness through Algorithmic Collective Action: a coordinated minority group strategically relabels its own data to enhance fairness, without altering the firm's training process. We propose three practical, model-agnostic methods to approximate ideal relabeling and validate them on real-world datasets. Our findings show that even a subgroup of the minority can substantially reduce unfairness at a small cost in overall prediction error.
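The collective-action idea in the abstract can be illustrated with a minimal sketch. The setup below is entirely hypothetical (the synthetic data, the least-squares classifier, the 80% participation rate, and the demographic-parity gap as the fairness metric are all illustrative assumptions, not the paper's actual methods): a fraction of the minority group relabels its own biased data points before training, and the fairness gap of the resulting model shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000

# Hypothetical synthetic population: q is the true (fair) label,
# g marks the minority group (g=1), x is a noisy signal of q.
g = (rng.random(n) < 0.3).astype(float)
q = (rng.random(n) < 0.5).astype(float)
x = q + rng.normal(0.0, 0.5, n)

# Label bias: half of the minority's true positives are recorded as 0.
y = q.copy()
flipped = (g == 1) & (q == 1) & (rng.random(n) < 0.5)
y[flipped] = 0.0

def train_and_gap(labels):
    """Fit a least-squares linear model on [1, x, g] and return the
    demographic-parity gap of its thresholded predictions."""
    A = np.column_stack([np.ones(n), x, g])
    w, *_ = np.linalg.lstsq(A, labels, rcond=None)
    pred = (A @ w > 0.5).astype(float)
    return abs(pred[g == 0].mean() - pred[g == 1].mean())

gap_before = train_and_gap(y)

# Collective action: 80% of the affected minority members relabel
# their own data point to the favorable label before training.
y_ca = y.copy()
y_ca[flipped & (rng.random(n) < 0.8)] = 1.0
gap_after = train_and_gap(y_ca)

print(f"DP gap without action: {gap_before:.3f}")
print(f"DP gap with collective relabeling: {gap_after:.3f}")
```

Note that the firm's training procedure (here, plain least squares) is untouched; only the data contributed by the coordinated subgroup changes, which is the defining property of the framework.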
Nov-17-2025