Predict Responsibly: Increasing Fairness by Learning To Defer
Madras, David, Pitassi, Toniann, Zemel, Richard
In many high-stakes ML applications, there are multiple decision-makers involved, both automated and human. The interaction between these agents often goes unaddressed in algorithmic development. In this work, we explore a simple version of this interaction with a two-stage framework containing an automated model and an external decision-maker. The model can choose to say IDK ("I don't know") and pass the decision downstream, as explored in rejection learning. We extend this concept by proposing learning to defer, which generalizes the rejection learning framework by considering the effect of the other agents in the decision-making process. We propose a learning algorithm which accounts for potential biases held by external decision-makers in a system. Experiments on real-world datasets demonstrate that learning to defer can make a system not only more accurate but also less biased. We show that even when the downstream decision-makers are highly biased, deferring models can still greatly improve the fairness of the entire system.
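The core idea, that a model should weigh its own expected loss against the external decision-maker's, can be illustrated with a minimal sketch. This is not the paper's exact objective or notation; the function names, the binary-classification setup, and the fixed `defer_cost` penalty are illustrative assumptions.

```python
import numpy as np

def system_loss(model_prob, defer_prob, dm_pred, y, defer_cost=0.05):
    """Expected per-example loss of a two-stage system (illustrative sketch).

    With probability `defer_prob` the model says IDK and the external
    decision-maker's (DM's) prediction is used; otherwise the model's own
    probabilistic prediction is scored. Plain rejection learning would
    charge only a fixed cost for deferring; learning to defer instead
    folds in the DM's actual loss, so the model learns *when* the DM is
    worth trusting.
    """
    # Cross-entropy loss of the model's own prediction (binary label y).
    model_loss = -(y * np.log(model_prob) + (1 - y) * np.log(1 - model_prob))
    # 0-1 loss of the external decision-maker, plus a small deferral penalty.
    dm_loss = float(dm_pred != y) + defer_cost
    return (1 - defer_prob) * model_loss + defer_prob * dm_loss
```

For example, if the DM is reliable on an example the model finds hard, a high `defer_prob` lowers the system loss; if the DM is biased or wrong there, deferring raises it, pushing the model to predict on its own.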
Feb-20-2018