Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer

Neural Information Processing Systems

In many machine learning applications, there are multiple decision-makers involved, both automated and human. The interaction between these agents often goes unaddressed in algorithmic development. In this work, we explore a simple version of this interaction with a two-stage framework containing an automated model and an external decision-maker. The model can choose to output PASS, sending the decision downstream, as explored in rejection learning. We extend this concept by proposing learning to defer, which generalizes rejection learning by considering the effect of other agents in the decision-making process. We propose a learning algorithm which accounts for potential biases held by external decision-makers in a system. Experiments demonstrate that learning to defer can make systems not only more accurate but also less biased. Even when working with inconsistent or biased users, we show that deferring models still greatly improve the accuracy and/or fairness of the entire system.
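The abstract describes a system-level objective: with some probability the model defers and the external decision-maker's prediction is used, otherwise the model's own prediction is used, with an optional penalty for deferring as in rejection learning. The sketch below illustrates that mixture loss for binary classification; it is a minimal illustration under our own assumptions, and the function and parameter names (`defer_loss`, `defer_cost`, etc.) are hypothetical, not the paper's implementation.

```python
import numpy as np

def defer_loss(y_true, model_prob, dm_prob, defer_prob, defer_cost=0.0):
    """Illustrative per-example learning-to-defer loss (binary labels).

    With probability `defer_prob` the system uses the external
    decision-maker's prediction `dm_prob`; otherwise it uses the
    model's own prediction `model_prob`. `defer_cost` penalizes
    passing the decision downstream, as in rejection learning.
    """
    eps = 1e-12  # avoid log(0)
    def ce(p):  # binary cross-entropy against y_true
        return -(y_true * np.log(p + eps) + (1 - y_true) * np.log(1 - p + eps))
    return (1 - defer_prob) * ce(model_prob) + defer_prob * (ce(dm_prob) + defer_cost)
```

For example, if the decision-maker is more confident in the correct label than the model, increasing the deferral probability lowers the system loss, while a positive `defer_cost` pushes the system back toward deciding on its own.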


Reviews: Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer

Neural Information Processing Systems

The paper studies the interaction between an automated model and a decision-maker. The authors refer to this new model of learning as learning-to-defer and draw connections between it, rejection learning, and the mixture-of-experts model of learning. The paper proposes approaches to solve learning-to-defer and evaluates them experimentally on fairness-sensitive datasets. Questions: (1) Is there any particular reason for choosing neural networks as the binary classifier over other methods? The COMPAS dataset has been studied for fairness-accuracy trade-offs in many previous works with different learning methods.


Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer

Madras, David, Pitassi, Toni, Zemel, Richard
