An Axiomatic Theory of Provably-Fair Welfare-Centric Machine Learning
We address an inherent difficulty in welfare-theoretic fair machine learning (ML) by proposing an equivalently axiomatically justified alternative setting, and studying the resulting computational and statistical learning questions. Welfare metrics quantify overall wellbeing across a population of groups, and welfare-based objectives and constraints have recently been proposed to incentivize fair ML methods to satisfy the diverse needs of these groups. However, many ML problems are cast as loss minimization tasks, rather than utility maximization, and thus require nontrivial modeling to construct utility functions. We define a complementary metric, termed malfare, measuring overall societal harm, with axiomatic justification via the standard axioms of cardinal welfare, and cast fair ML as malfare minimization over the risk values (expected losses) of each group. Surprisingly, the axioms of cardinal welfare (malfare) dictate that this is not equivalent to simply defining utility as negative loss and maximizing welfare. Building upon these concepts, we define fair-PAC learning, where a fair-PAC learner is an algorithm that learns an ε-δ malfare-optimal model with bounded sample complexity, for any data distribution and any (axiomatically justified) malfare concept. Finally, we show conditions under which many standard PAC learners may be converted to fair-PAC learners, which places fair-PAC learning on firm theoretical ground, as it yields statistical, and in some cases computational, efficiency guarantees for many well-studied ML models. Fair-PAC learning is also practically relevant, as it democratizes fair ML by providing concrete training algorithms with rigorous generalization guarantees.
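For intuition, one axiomatically natural malfare family in this line of work is the weighted p-power mean of per-group risks. Below is a minimal Python sketch, assuming NumPy; the function name power_mean_malfare and its exact signature are illustrative, not the paper's API, and the paper's definitions and axioms are more general.

```python
import numpy as np

def power_mean_malfare(risks, weights, p):
    """Weighted p-power mean of nonnegative per-group risks (a sketch).

    For p >= 1 this family behaves as a malfare function over group
    risks: p = 1 is the weighted average risk (utilitarian), and
    p -> infinity is the maximum group risk (egalitarian / worst-case).
    """
    risks = np.asarray(risks, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize weights to a distribution
    if np.isinf(p):
        return risks.max()  # limit case: risk of the worst-off group
    return (weights @ risks**p) ** (1.0 / p)

# Example: two groups with risks 0.1 and 0.4, equal weights.
# p = 1 gives 0.25; larger p moves toward the worse-off group's 0.4.
for p in (1, 2, np.inf):
    print(p, power_mean_malfare([0.1, 0.4], [0.5, 0.5], p))
```

Larger p places more emphasis on the worst-off group, interpolating between average-case and worst-case risk, which is why minimizing such a malfare objective differs from minimizing overall average loss.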
To Pool or Not To Pool: Analyzing the Regularizing Effects of Group-Fair Training on Shared Models
Cyrus Cousins, I. Elizabeth Kumar, Suresh Venkatasubramanian
In fair machine learning, one source of performance disparities between groups is overfitting to groups with relatively few training samples. We derive group-specific bounds on the generalization error of welfare-centric fair machine learning that benefit from the larger sample size of the majority group. We do this by considering group-specific Rademacher averages over a restricted hypothesis class, which contains the family of models likely to perform well with respect to a fair learning objective (e.g., a power mean). Our simulations demonstrate that these bounds improve over a naive method, as theory predicts, with particularly significant improvements for smaller group sizes.
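As a rough illustration of the quantity these bounds build on, the following Python sketch estimates an empirical Rademacher average for a finite, restricted model class on one group's sample via random sign vectors. This is a generic Monte Carlo estimate, not the paper's construction; empirical_rademacher and its inputs are hypothetical.

```python
import numpy as np

def empirical_rademacher(losses, n_draws=1000, rng=None):
    """Monte Carlo estimate of the empirical Rademacher average of a
    finite model class on one group's sample.

    losses: (n_models, n_samples) array, one row of per-sample losses
    per model in the (restricted) hypothesis class.
    Estimates E_sigma[ max_h (1/n) sum_i sigma_i * loss_h(x_i) ] over
    n_draws random sign vectors sigma in {-1, +1}^n.
    """
    rng = np.random.default_rng(rng)
    n_models, n = losses.shape
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice((-1.0, 1.0), size=n)  # random Rademacher signs
        total += (losses @ sigma).max() / n      # supremum over the class
    return total / n_draws

# Hypothetical usage: 5 candidate models, 200 samples in this group.
rng = np.random.default_rng(0)
losses = rng.random((5, 200))
print(empirical_rademacher(losses, n_draws=500, rng=0))
```

Restricting the class to models plausible under the fair objective shrinks the supremum, and hence the resulting bound, while groups with larger samples (larger n) get smaller averages still, which matches the abstract's claim that the bounds help most for smaller groups.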