Collaborating Authors

 Morgenstern, Jamie


A Convex Framework for Fair Regression

arXiv.org Machine Learning

We introduce a flexible family of fairness regularizers for (linear and logistic) regression problems. These regularizers all enjoy convexity, permitting fast optimization, and they span the range from notions of group fairness to strong individual fairness. By varying the weight on the fairness regularizer, we can compute the efficient frontier of the accuracy-fairness trade-off on any given dataset, and we measure the severity of this trade-off via a numerical quantity we call the Price of Fairness (PoF). The centerpiece of our results is an extensive comparative study of the PoF across six different datasets in which fairness is a primary consideration.
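
As a concrete illustration of this setup (not the paper's exact estimator), the sketch below fits a squared-loss linear regression with one convex group-fairness penalty, the squared gap between two groups' mean predictions, and sweeps the penalty weight to trace an accuracy-fairness frontier. The function names, penalty choice, and synthetic data are all illustrative assumptions.

```python
import numpy as np

def fit_fair_linreg(X, y, group, lam, n_iters=2000, lr=0.01):
    """Squared loss plus one convex group-fairness penalty: the squared
    gap between the two groups' mean predictions. A single illustrative
    member of the kind of regularizer family the abstract describes."""
    n, d = X.shape
    m0 = X[group == 0].mean(axis=0)           # mean feature vector, group 0
    m1 = X[group == 1].mean(axis=0)           # mean feature vector, group 1
    w = np.zeros(d)
    for _ in range(n_iters):
        gap = (m0 - m1) @ w                   # difference in mean predictions
        grad = 2 * X.T @ (X @ w - y) / n + lam * 2 * gap * (m0 - m1)
        w -= lr * grad
    return w

# Sweep the regularizer weight to trace the accuracy-fairness frontier.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
group = rng.integers(0, 2, size=200)
y = X @ rng.normal(size=5) + 0.5 * group + rng.normal(scale=0.1, size=200)
for lam in [0.0, 0.1, 1.0, 10.0]:
    w = fit_fair_linreg(X, y, group, lam)
    mse = np.mean((X @ w - y) ** 2)
    gap = abs((X[group == 0] @ w).mean() - (X[group == 1] @ w).mean())
    print(f"lam={lam:5.1f}  mse={mse:.3f}  mean-prediction gap={gap:.3f}")
```

A PoF-style quantity can then be read off such a frontier as the relative accuracy lost when moving from the unregularized solution to a fair one.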


Fairness in Learning: Classic and Contextual Bandits

arXiv.org Machine Learning

We introduce the study of fairness in multi-armed bandit problems. Our fairness definition can be interpreted as demanding that given a pool of applicants (say, for college admission or mortgages), a worse applicant is never favored over a better one, despite a learning algorithm's uncertainty over the true payoffs. We prove results of two types. First, in the important special case of the classic stochastic bandits problem (i.e., in which there are no contexts), we provide a provably fair algorithm based on "chained" confidence intervals, and give a cumulative regret bound with a cubic dependence on the number of arms. We further show that any fair algorithm must have such a dependence. When combined with regret bounds for standard non-fair algorithms such as UCB, this proves a strong separation between fair and unfair learning, which extends to the general contextual case. In the general contextual case, we prove a tight connection between fairness and the KWIK (Knows What It Knows) learning model: a KWIK algorithm for a class of functions can be transformed into a provably fair contextual bandit algorithm, and conversely any fair contextual bandit algorithm can be transformed into a KWIK learning algorithm. This tight connection allows us to provide a provably fair algorithm for the linear contextual bandit problem with a polynomial dependence on the dimension, and to show (for a different class of functions) a worst-case exponential gap in regret between fair and non-fair learning algorithms.
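
A minimal sketch of the "chained" confidence-interval idea for the stochastic (no-context) case, in the spirit of the paper's fair algorithm: link arms whose confidence intervals overlap, take the chain containing the arm with the highest upper bound, and play uniformly within it, so a plausibly-worse arm is never favored over a plausibly-better one. The Hoeffding-style radius and all constants here are generic assumptions, not the paper's exact choices.

```python
import numpy as np

def fair_bandit(means, T=5000, seed=0):
    """Chained-confidence-interval bandit sketch for Gaussian rewards."""
    rng = np.random.default_rng(seed)
    k = len(means)
    counts = np.ones(k)                       # one forced initial pull per arm
    sums = rng.normal(means, 1.0)
    for _ in range(k, T):
        mu = sums / counts
        rad = np.sqrt(2 * np.log(T) / counts) # Hoeffding-style radius (assumed)
        lo, hi = mu - rad, mu + rad
        # Build the overlap chain containing the arm with the highest upper bound.
        order = np.argsort(-hi)
        chain = [order[0]]
        for a in order[1:]:
            if hi[a] >= min(lo[c] for c in chain):  # interval touches the chain
                chain.append(a)
            else:
                break                         # later arms have even lower hi
        arm = rng.choice(chain)               # play uniformly over the chain
        counts[arm] += 1
        sums[arm] += rng.normal(means[arm], 1.0)
    return sums / counts

print(np.round(fair_bandit([0.9, 0.5, 0.1]), 2))  # empirical means after T rounds
```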


Impartial Peer Review

AAAI Conferences

Motivated by a radically new peer review system that the National Science Foundation recently experimented with, we study peer review systems in which proposals are reviewed by PIs who have submitted proposals themselves. An (m,k)-selection mechanism asks each PI to review m proposals, and uses these reviews to select (at most) k proposals. We are interested in impartial mechanisms, which guarantee that the ratings given by a PI to others' proposals do not affect the likelihood of the PI's own proposal being selected. We design an impartial mechanism that selects a k-subset of proposals that is nearly as highly rated as the one selected by the non-impartial (abstract version of the) NSF pilot mechanism, even when the latter mechanism has the "unfair" advantage of eliciting honest reviews.
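
The paper's mechanism is more refined, but a classic way to see that impartiality is achievable at all is random-partition selection: split the PIs into two halves, let each half review only the other half's proposals, and choose each half's winners using only the other half's scores, so no PI's reviews can touch their own proposal's fate. The sketch below is this generic construction under assumed names, not the paper's mechanism.

```python
import numpy as np

def partition_select(ratings, k, seed=0):
    """Impartial selection via random partition.

    ratings[i][j] = PI i's score for PI j's proposal (np.nan if unreviewed).
    Each half's proposals are ranked only by the other half's reviews."""
    rng = np.random.default_rng(seed)
    n = ratings.shape[0]
    perm = rng.permutation(n)
    half_a, half_b = perm[: n // 2], perm[n // 2:]
    selected = []
    for reviewers, candidates in [(half_a, half_b), (half_b, half_a)]:
        # Average each candidate's score over the *other* half's reviews only.
        scores = np.nanmean(ratings[np.ix_(reviewers, candidates)], axis=0)
        selected.extend(candidates[np.argsort(-scores)[: k // 2]].tolist())
    return selected

ratings = np.random.default_rng(1).uniform(0, 5, size=(8, 8))
print(partition_select(ratings, k=4))
```

Impartiality holds by construction: a PI's own proposal is scored exclusively by the opposite half, whose members' proposals the PI never reviews.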


How Bad Is Selfish Voting?

AAAI Conferences

It is well known that strategic behavior in elections is essentially unavoidable; we therefore ask: how bad can the rational outcome be? We answer this question via the notion of the price of anarchy, using the scores of alternatives as a proxy for their quality and bounding the ratio between the score of the optimal alternative and the score of the winning alternative in Nash equilibrium. Specifically, we are interested in Nash equilibria that are obtained via sequences of rational strategic moves. Focusing on three common voting rules — plurality, veto, and Borda — we provide very positive results for plurality and very negative results for Borda, and place veto in the middle of this spectrum.
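
A toy simulation of this setting is sketched below, under assumptions made for illustration: plurality with lexicographic tie-breaking, voters moving one at a time via best responses starting from the truthful profile, and the price-of-anarchy ratio measured on truthful scores as the abstract describes. The `max_rounds` cap is just a safety guard for the sketch.

```python
import numpy as np

def plurality_winner(votes, m):
    """Plurality winner with lexicographic tie-breaking."""
    return np.bincount(votes, minlength=m).argmax()

def best_response_dynamics(prefs, max_rounds=100):
    """Iterated best responses under plurality from truthful ballots.
    prefs[i] is voter i's ranking of the m alternatives, best first."""
    n, m = prefs.shape
    votes = prefs[:, 0].copy()                       # truthful ballots
    for _ in range(max_rounds):
        moved = False
        for i in range(n):
            rank = {c: r for r, c in enumerate(prefs[i])}
            def outcome(c):                          # rank i assigns to the
                trial = votes.copy(); trial[i] = c   # winner if i votes c
                return rank[plurality_winner(trial, m)]
            best = min(range(m), key=outcome)
            if outcome(best) < outcome(votes[i]):    # strictly better winner
                votes[i], moved = best, True
        if not moved:                                # Nash equilibrium reached
            return votes
    return votes

rng = np.random.default_rng(3)
prefs = np.argsort(rng.random((7, 4)), axis=1)       # 7 voters, 4 alternatives
eq_votes = best_response_dynamics(prefs)
truthful_scores = np.bincount(prefs[:, 0], minlength=4)
winner = plurality_winner(eq_votes, 4)
# Ratio of the optimal truthful score to the equilibrium winner's truthful
# score (guarding against a zero truthful score in this toy).
print("PoA ratio:", truthful_scores.max() / max(truthful_scores[winner], 1))
```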


On Maxsum Fair Cake Divisions

AAAI Conferences

We consider the problem of selecting fair divisions of a heterogeneous divisible good among a set of agents. Recent work (Cohler et al., AAAI 2011) focused on designing algorithms for computing maxsum (social welfare maximizing) allocations under the fairness notion of envy-freeness. Maxsum allocations can also be found under alternative notions such as equitability. In this paper, we examine the properties of these allocations. In particular, we provide conditions for when maxsum envy-free or equitable allocations are Pareto optimal and give examples where fairness with Pareto optimality is not possible. We also prove that maxsum envy-free allocations have weakly greater welfare than maxsum equitable allocations when agents have structured valuations, and we derive an approximate version of this inequality for general valuations.
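
To make the welfare-fairness tension concrete, here is a small sketch for piecewise-constant valuations on a shared grid: the unconstrained maxsum division simply gives each grid cell to the agent who values it most, and a quick envy check shows this need not be envy-free, which is why the paper studies maxsum divisions subject to fairness constraints. The grid encoding and all names are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

def maxsum_allocation(densities):
    """Unconstrained welfare-maximizing division for piecewise-constant
    valuations on a shared grid: each cell goes to its highest valuer."""
    return np.argmax(densities, axis=0)       # owner of each grid cell

def envy_matrix(densities, owner):
    """E[i, j] = agent i's value for agent j's piece."""
    n, m = densities.shape
    cell = 1.0 / m                            # uniform grid cell width
    return np.array([[densities[i, owner == j].sum() * cell
                      for j in range(n)] for i in range(n)])

# Two agents, 10-cell grid; densities normalized so each values the cake at 1.
densities = np.array([
    [1.5] * 5 + [0.5] * 5,   # agent 0 prefers the left half
    [1.4] * 5 + [0.6] * 5,   # agent 1 also prefers the left half, slightly less
])
densities /= densities.sum(axis=1, keepdims=True) / densities.shape[1]
owner = maxsum_allocation(densities)
E = envy_matrix(densities, owner)
# Here E[1, 0] = 0.7 > E[1, 1] = 0.3: agent 1 envies agent 0, so the pure
# maxsum division is not envy-free.
print(E)
```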