
Ensuring Fairness Beyond the Training Data

Neural Information Processing Systems

We initiate the study of fair classifiers that are robust to perturbations in the training distribution. Despite recent progress, the literature on fairness has largely ignored the design of fair and robust classifiers. In this work, we develop classifiers that are fair not only with respect to the training distribution but also for a class of distributions that are weighted perturbations of the training samples. We formulate a min-max objective function whose goal is to minimize a distributionally robust training loss, and at the same time, find a classifier that is fair with respect to a class of distributions. We first reduce this problem to finding a fair classifier that is robust with respect to the class of distributions. Based on an online learning algorithm, we develop an iterative algorithm that provably converges to such a fair and robust solution. Experiments on standard machine learning fairness datasets suggest that, compared to the state-of-the-art fair classifiers, our classifier retains fairness guarantees and test accuracy for a large class of perturbations on the test set. Furthermore, our experiments show that there is an inherent trade-off between fairness robustness and accuracy of such classifiers.
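The min-max idea in the abstract can be sketched in code. The following is a hypothetical illustration, not the authors' algorithm: a learner takes gradient steps on a sample-weighted logistic loss, while an adversary re-weights the training samples toward high-loss points by exponential tilting, which is one simple way to realize "weighted perturbations of the training samples". The toy data, the tilt strength, and the step sizes are all assumptions.

```python
# Hypothetical sketch of a min-max robust training loop (not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the label depends on the first feature plus noise.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)          # classifier parameters
eta_w, eta_p = 0.5, 2.0  # assumed learner step size and adversary tilt strength

for _ in range(200):
    pred = sigmoid(X @ w)
    losses = -(y * np.log(pred + 1e-12) + (1 - y) * np.log(1 - pred + 1e-12))
    # Adversary: best-response sample weights, exponentially tilted toward
    # high-loss points (a weighted perturbation of the empirical distribution).
    p = np.exp(eta_p * (losses - losses.max()))
    p /= p.sum()
    # Learner: gradient step on the adversarially weighted log-loss.
    w -= eta_w * X.T @ (p * (pred - y))

# The adversarial weights put at least as much loss mass on the learner
# as the uniform empirical distribution does.
pred = sigmoid(X @ w)
losses = -(y * np.log(pred + 1e-12) + (1 - y) * np.log(1 - pred + 1e-12))
p = np.exp(eta_p * (losses - losses.max()))
p /= p.sum()
print(p @ losses >= losses.mean())  # True: tilting upweights hard points
```

Note that with exponential tilting the loop is effectively gradient descent on a log-sum-exp of the per-sample losses, which stays convex for a linear model; the fairness constraint of the paper would enter as an additional term or projection on the learner's side.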


Review for NeurIPS paper: Ensuring Fairness Beyond the Training Data

Neural Information Processing Systems

The authors address an important area of study, aiming to ensure fairness beyond the training data by optimizing a worst-case fairness loss across any weighted combination of the training set. They show that such fairness robustness comes at the cost of lower accuracy. Please add the material from the rebuttal and incorporate the reviewers' detailed comments.


Review for NeurIPS paper: Ensuring Fairness Beyond the Training Data

Neural Information Processing Systems

Additional Feedback: 1- Perhaps the reviewer missed it, but what is "f" in Eq (1) onwards? Is f(x,a) the same as h(x,a), which is defined in line 80? 2- The statement in the text immediately after Eq (1) seems incorrect. DP is supposed to be the difference in "acceptance rate" between the two groups a and a'. This would also make sense assuming that f() returns the predicted label. Separately, some discussion is needed of the drop in accuracy for the Robust classifier.
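The reviewer's reading of demographic parity can be made concrete: the DP gap is the difference in acceptance rates between the two protected groups, computed from the classifier's predicted labels. This is a generic illustration of the metric; the function name and toy data are hypothetical, not from the paper.

```python
# Illustrative demographic-parity gap: difference in acceptance rates
# between two protected groups, given predicted labels f(x, a).
import numpy as np

def dp_gap(predicted_labels, groups):
    """Absolute difference in acceptance rate between the two groups."""
    predicted_labels = np.asarray(predicted_labels)
    groups = np.asarray(groups)
    rates = [predicted_labels[groups == g].mean() for g in np.unique(groups)]
    return abs(rates[0] - rates[1])

# Toy example: group 0 is accepted at 3/4, group 1 at 1/4.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(dp_gap(preds, groups))  # 0.5
```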


Intelligent Decisioning: Ensuring fairness in analytically-driven decision making

#artificialintelligence

The realities of today's digital transformation are pushing organizations across all industries to expand and accelerate their decision-making processes. Adaptability and precision remain essential, while decision complexity is accelerating. In response, organizations are changing the way they approach decisioning, leveraging technology together with human potential to obtain the highest value. There are two approaches. The most familiar is augmented decision making, where humans use analytically driven insights to make a decision, for example in call centers. The second is automated decision making, where the machine makes the decisions, as in high-volume transactional systems such as credit origination, next-best offers, and logistical routing. As these analytically driven approaches are embedded into decisioning operations, the impact on people and companies is far-reaching, including the need to ensure fairness and eliminate bias in the process.
