Collaborating Authors

 Procaccia, Ariel D.


Collaborative PAC Learning

Neural Information Processing Systems

We introduce a collaborative PAC learning model, in which k players attempt to learn the same underlying concept. We ask how much more information is required to learn an accurate classifier for all players simultaneously. We refer to the ratio between the sample complexity of collaborative PAC learning and its non-collaborative (single-player) counterpart as the overhead. We design learning algorithms with O(ln(k)) and O(ln^2(k)) overhead in the personalized and centralized variants of our model, respectively. This gives an exponential improvement upon the naive algorithm that does not share information among players. We complement our upper bounds with an Omega(ln(k)) overhead lower bound, showing that our results are tight up to a logarithmic factor.
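The overhead comparison in the abstract can be written out as follows (the notation is introduced here for concreteness and is not the paper's; m(epsilon, delta) denotes the single-player sample complexity):

```latex
% Overhead of collaborative PAC learning. The naive approach runs the
% single-player learner separately for each of the k players, giving
% overhead Theta(k); the paper's algorithms shrink this to O(ln k) and
% O(ln^2 k), an exponential improvement in k.
\[
  \mathrm{overhead}(k) \;=\;
  \frac{m_{\mathrm{collab}}(k,\epsilon,\delta)}{m(\epsilon,\delta)},
  \qquad
  \underbrace{\Theta(k)}_{\text{naive, no sharing}}
  \;\longrightarrow\;
  \underbrace{O(\ln k)}_{\text{personalized}},\;
  \underbrace{O(\ln^{2} k)}_{\text{centralized}}.
\]
```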


Preference Elicitation For Participatory Budgeting

AAAI Conferences

Participatory budgeting enables the allocation of public funds by collecting and aggregating individual preferences; it has already had a sizable real-world impact. But making the most of this new paradigm requires a rethinking of some of the basics of computational social choice, including the very way in which individuals express their preferences. We analytically compare four preference elicitation methods -- knapsack votes, rankings by value or value for money, and threshold approval votes -- through the lens of implicit utilitarian voting, and find that threshold approval votes are qualitatively superior. This conclusion is supported by experiments using data from real participatory budgeting elections.
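To make the elicitation format concrete, here is a minimal sketch of threshold approval voting for participatory budgeting. The utility numbers, the fixed threshold, and the greedy budget-respecting aggregation are illustrative assumptions, not the paper's exact mechanism:

```python
# Sketch of threshold approval voting: each voter approves every project
# whose (hypothetical) utility meets a threshold; approvals are then
# aggregated greedily subject to the budget.

def threshold_approval_votes(utilities, threshold):
    """Each voter approves every project with utility >= threshold."""
    return [{p for p, u in voter.items() if u >= threshold}
            for voter in utilities]

def aggregate(votes, costs, budget):
    """Greedily fund the most-approved projects that fit in the budget."""
    counts = {p: sum(p in v for v in votes) for p in costs}
    funded, spent = [], 0
    for p in sorted(counts, key=counts.get, reverse=True):
        if spent + costs[p] <= budget:
            funded.append(p)
            spent += costs[p]
    return funded

utilities = [
    {"park": 9, "library": 4, "road": 7},
    {"park": 2, "library": 8, "road": 6},
    {"park": 5, "library": 6, "road": 1},
]
costs = {"park": 100, "library": 80, "road": 60}
votes = threshold_approval_votes(utilities, threshold=5)
print(aggregate(votes, costs, budget=150))
```

The paper's analysis concerns which elicitation format best approximates utilitarian welfare, not this particular aggregation rule.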


Small Representations of Big Kidney Exchange Graphs

AAAI Conferences

Kidney exchanges are organized markets where patients swap willing but incompatible donors. In the last decade, kidney exchanges grew from small and regional to large and national — and soon, international. This growth results in more lives saved, but exacerbates the empirical hardness of the NP-complete problem of optimally matching patients to donors. State-of-the-art matching engines use integer programming techniques to clear fielded kidney exchanges, but these methods must be tailored to specific models and objective functions, and may fail to scale to larger exchanges. In this paper, we observe that if the kidney exchange compatibility graph can be encoded by a constant number of patient and donor attributes, the clearing problem is solvable in polynomial time. We give necessary and sufficient conditions for losslessly shrinking the representation of an arbitrary compatibility graph. Then, using real compatibility graphs from the UNOS US-wide kidney exchange, we show how many attributes are needed to encode real graphs. The experiments show that, indeed, small numbers of attributes suffice.
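The key observation above is that compatibility can be a function of a few attributes, so vertices with identical attribute vectors are interchangeable and the graph shrinks to attribute counts. A toy sketch, assuming (for illustration only) that blood type is the sole attribute; real UNOS graphs involve more attributes and crossmatch data:

```python
# Attribute-encoded compatibility: each vertex is a (patient, donor)
# pair of blood types, and edges are determined entirely by attributes,
# so identical pairs can be stored once with a multiplicity.

from collections import Counter

# donor blood type -> patient blood types it can donate to
CAN_DONATE = {
    "O":  {"O", "A", "B", "AB"},
    "A":  {"A", "AB"},
    "B":  {"B", "AB"},
    "AB": {"AB"},
}

def compatible(donor_type, patient_type):
    return patient_type in CAN_DONATE[donor_type]

def edge(u, v):
    """u's paired donor can donate to v's patient (pairs are (patient, donor))."""
    return compatible(u[1], v[0])

pairs = [("A", "O"), ("A", "O"), ("B", "A"), ("O", "AB")]
shrunk = Counter(pairs)  # identical attribute vectors collapse to counts
print(shrunk)
```

With a constant number of attribute values, the shrunk representation has constant size regardless of how many pairs the exchange contains, which is what makes polynomial-time clearing possible.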


Small Representations of Big Kidney Exchange Graphs

arXiv.org Artificial Intelligence

Kidney exchanges are organized markets where patients swap willing but incompatible donors. In the last decade, kidney exchanges grew from small and regional to large and national---and soon, international. This growth results in more lives saved, but exacerbates the empirical hardness of the $\mathcal{NP}$-complete problem of optimally matching patients to donors. State-of-the-art matching engines use integer programming techniques to clear fielded kidney exchanges, but these methods must be tailored to specific models and objective functions, and may fail to scale to larger exchanges. In this paper, we observe that if the kidney exchange compatibility graph can be encoded by a constant number of patient and donor attributes, the clearing problem is solvable in polynomial time. We give necessary and sufficient conditions for losslessly shrinking the representation of an arbitrary compatibility graph. Then, using real compatibility graphs from the UNOS nationwide kidney exchange, we show how many attributes are needed to encode real compatibility graphs. The experiments show that, indeed, small numbers of attributes suffice.


An Algorithmic Framework for Strategic Fair Division

AAAI Conferences

A large body of literature deals with the so-called cake cutting problem -- a misleadingly childish metaphor for the challenging and important task of fairly dividing a heterogeneous divisible good among multiple agents (see the recent survey by Procaccia (2013) and the books by Brams and Taylor (1996) and Robertson and Webb (1998)). In particular, there is a significant amount of AI work on cake cutting (Procaccia 2009; Caragiannis, Lai, and Procaccia 2011; Brams et al. 2012; Bei et al. 2012; Aumann, Dombb, and Hassidim 2013; Kurokawa, Lai, and Procaccia 2013; Brânzei, Procaccia, and Zhang 2013; Brânzei and Miltersen 2013; Chen et al. 2013; Balkanski et al. 2014; Brânzei and Miltersen 2015; Segal-Halevi, Hassidim, and Aumann 2015), which is closely intertwined with emerging real-world applications of fair division more broadly (Goldman and Procaccia 2014; Kurokawa, Procaccia, and Shah 2015). So how would strategic agents behave when faced with the cut and choose protocol? A standard way of answering this question employs the notion of Nash equilibrium: each agent would use a strategy that is a best response to the other agent's strategy. To set up a Nash equilibrium, suppose that the first agent cuts two pieces that the second agent values equally; the second agent selects its more preferred piece, and the one less preferred by the first agent in case of a tie. Clearly, the second agent cannot gain from deviating, as it is selecting a piece that is at least as preferred as the other. As for the first agent, if it makes its preferred piece even bigger, the second agent would choose that piece, making the first agent worse off. Interestingly enough, in this equilibrium the tables are turned; now it is the second agent who is getting exactly half of its value for the whole cake.
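The equilibrium cut described above can be illustrated numerically: the first agent cuts at the point where the second agent is indifferent between the two pieces, leaving the second agent with exactly half of its total value. The valuation density below is a hypothetical choice for illustration:

```python
# Toy illustration of the cut-and-choose equilibrium: bisect for the cut
# point where the second agent values [0, x] and [x, 1] equally, so it
# ends up with exactly half of its value for the whole cake.

def integral(density, a, b, steps=1000):
    """Riemann-sum (midpoint) approximation of an agent's value for [a, b]."""
    h = (b - a) / steps
    return sum(density(a + (i + 0.5) * h) for i in range(steps)) * h

def equal_value_cut(density, lo=0.0, hi=1.0, iters=60):
    """Bisect for x such that the agent values [0, x] and [x, 1] equally."""
    total = integral(density, 0.0, 1.0)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if integral(density, 0.0, mid) < total / 2:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

agent2 = lambda x: 2 * x          # agent 2 values the right side more
x = equal_value_cut(agent2)       # the first agent's equilibrium cut
left = integral(agent2, 0.0, x)
right = integral(agent2, x, 1.0)
print(round(x, 3), round(left, 3), round(right, 3))
```

For this density the cut lands at x = sqrt(1/2), and the two pieces are worth exactly half of the cake each to the second agent, matching the "tables are turned" observation.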


Optimal Aggregation of Uncertain Preferences

AAAI Conferences

A paradigmatic problem in social choice theory deals with the aggregation of subjective preferences of individuals --- represented as rankings of alternatives --- into a social ranking. We are interested in settings where individuals are uncertain about their own preferences, and represent their uncertainty as distributions over rankings. Under the classic objective of minimizing the (expected) sum of Kendall tau distances between the input rankings and the output ranking, we establish that preference elicitation is surprisingly straightforward and near-optimal solutions can be obtained in polynomial time. We show, both in theory and using real data, that ignoring uncertainty altogether can lead to suboptimal outcomes.
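The objective above can be spelled out on a tiny instance by brute force: choose the output ranking minimizing the expected sum of Kendall tau distances to the voters' distributions. The voter distributions below are made up, and exhaustive search is of course only feasible for a handful of alternatives:

```python
# Brute-force illustration of minimizing the expected sum of Kendall tau
# distances between uncertain input rankings and the output ranking.

from itertools import combinations, permutations

def kendall_tau(r1, r2):
    """Number of alternative pairs ordered differently by r1 and r2."""
    pos1 = {a: i for i, a in enumerate(r1)}
    pos2 = {a: i for i, a in enumerate(r2)}
    return sum(
        (pos1[a] < pos1[b]) != (pos2[a] < pos2[b])
        for a, b in combinations(r1, 2)
    )

def expected_cost(output, voters):
    """Sum over voters of the expected distance to their distribution."""
    return sum(
        p * kendall_tau(output, ranking)
        for dist in voters
        for ranking, p in dist.items()
    )

voters = [  # each voter: a distribution over rankings of {a, b, c}
    {("a", "b", "c"): 0.7, ("b", "a", "c"): 0.3},
    {("a", "b", "c"): 0.6, ("a", "c", "b"): 0.4},
]
best = min(permutations("abc"), key=lambda r: expected_cost(r, voters))
print(best)
```

The paper shows this objective admits near-optimal polynomial-time solutions with little elicitation; the exhaustive search here is just to make the objective concrete.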


When Can the Maximin Share Guarantee Be Guaranteed?

AAAI Conferences

The fairness notion of maximin share (MMS) guarantee underlies a deployed algorithm for allocating indivisible goods under additive valuations. Our goal is to understand when we can expect to be able to give each player his MMS guarantee. Previous work has shown that such an MMS allocation may not exist, but the counterexample requires a number of goods that is exponential in the number of players; we give a new construction that uses only a linear number of goods. On the positive side, we formalize the intuition that these counterexamples are very delicate by designing an algorithm that provably finds an MMS allocation with high probability when valuations are drawn at random.
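For readers new to the notion: a player's maximin share is the best value it can guarantee itself by partitioning the goods into n bundles and receiving the worst one. A brute-force sketch on a tiny made-up instance (the search is exponential, so this is for illustration only):

```python
# Brute-force computation of a player's maximin share (MMS) value:
# maximize, over all partitions of the goods into n bundles, the value
# of the worst bundle under this player's additive valuation.

from itertools import product

def mms_value(values, n):
    """values[g] = this player's additive value for good g."""
    goods = list(values)
    best = 0
    for labels in product(range(n), repeat=len(goods)):  # bundle labels
        bundles = [0] * n
        for g, b in zip(goods, labels):
            bundles[b] += values[g]
        best = max(best, min(bundles))
    return best

# Two players with additive valuations over four goods (made-up numbers).
players = [
    {"g1": 4, "g2": 3, "g3": 2, "g4": 1},
    {"g1": 1, "g2": 2, "g3": 3, "g4": 4},
]
print([mms_value(v, n=len(players)) for v in players])
```

An MMS allocation gives each player a bundle worth at least its own MMS value; the paper's point is that counterexamples where no such allocation exists are delicate, and random valuations admit one with high probability.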


Is Approval Voting Optimal Given Approval Votes?

Neural Information Processing Systems

Some crowdsourcing platforms ask workers to express their opinions by approving a set of k good alternatives. It seems that the only reasonable way to aggregate these k-approval votes is the approval voting rule, which simply counts the number of times each alternative was approved. We challenge this assertion by proposing a probabilistic framework of noisy voting, and asking whether approval voting yields an alternative that is most likely to be the best alternative, given k-approval votes. While the answer is generally positive, our theoretical and empirical results call attention to situations where approval voting is suboptimal.
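The baseline rule being questioned is simple enough to state in a few lines: approval voting counts how often each alternative was approved and picks the most-approved one. The votes below are made up for illustration:

```python
# The approval voting rule on k-approval votes: count approvals and
# return the most-approved alternative.

from collections import Counter

def approval_winner(votes):
    """votes: iterable of sets, each a voter's approved alternatives."""
    counts = Counter(a for vote in votes for a in vote)
    return max(counts, key=counts.get), counts

votes = [{"x", "y"}, {"x", "z"}, {"x"}, {"y", "z"}]
winner, counts = approval_winner(votes)
print(winner, dict(counts))
```

The paper asks whether this rule actually maximizes the likelihood of picking the best alternative under a noisy voting model; the answer is usually, but not always, yes.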


Influence in Classification via Cooperative Game Theory

AAAI Conferences

A dataset has been classified by some unknown classifier into two types of points. What were the most important factors in determining the classification outcome? In this work, we employ an axiomatic approach in order to uniquely characterize an influence measure: a function that, given a set of classified points, outputs a value for each feature corresponding to its influence in determining the classification outcome. We show that our influence measure takes on an intuitive form when the unknown classifier is linear. Finally, we employ our influence measure in order to analyze the effects of user profiling on Google’s online display advertising.
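The paper characterizes its influence measure axiomatically; as a loose, hypothetical stand-in (not the paper's measure), one can gauge a feature's effect on a linear classifier by counting how many points change label when that feature is removed:

```python
# Simplistic stand-in for feature influence (NOT the axiomatically
# characterized measure from the paper): the fraction of points whose
# label under a linear classifier flips when a feature is zeroed out.

def linear_label(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

def flip_influence(w, points, j):
    """Fraction of points whose label flips when feature j is removed."""
    flips = 0
    for x in points:
        masked = [0 if i == j else xi for i, xi in enumerate(x)]
        flips += linear_label(w, x) != linear_label(w, masked)
    return flips / len(points)

w = [2.0, -1.0, 0.0]  # hypothetical linear classifier weights
points = [[1, 1, 5], [-1, 1, 2], [1, 3, 0], [-2, -1, 1]]
print([flip_influence(w, points, j) for j in range(3)])
```

Note that a feature with zero weight has zero influence under this stand-in, which matches the intuition that the paper's measure simplifies nicely in the linear case.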


Impartial Peer Review

AAAI Conferences

Motivated by a radically new peer review system that the National Science Foundation recently experimented with, we study peer review systems in which proposals are reviewed by PIs who have submitted proposals themselves. An (m,k)-selection mechanism asks each PI to review m proposals, and uses these reviews to select (at most) k proposals. We are interested in impartial mechanisms, which guarantee that the ratings given by a PI to others' proposals do not affect the likelihood of the PI's own proposal being selected. We design an impartial mechanism that selects a k-subset of proposals that is nearly as highly rated as the one selected by the non-impartial (abstract version of the) NSF pilot mechanism, even when the latter mechanism has the "unfair" advantage of eliciting honest reviews.
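One simple route to impartiality, sketched below, is a partition-style mechanism (a well-known device in this literature, not necessarily the paper's mechanism): PIs are randomly split into two halves, each half is ranked only by the other half's reviews, and each half has a fixed selection quota, so no PI's reviews can affect its own proposal's fate. The score matrix is made up:

```python
# Partition-style impartial selection: reviewers in one half only score
# proposals in the other half, and quotas per half are fixed in advance,
# so a PI's ratings never influence whether its own proposal is chosen.

import random

def impartial_select(scores, k, seed=0):
    """scores[i][j]: PI i's rating of PI j's proposal (i != j)."""
    n = len(scores)
    order = list(range(n))
    random.Random(seed).shuffle(order)       # random partition into halves
    half_a, half_b = order[: n // 2], order[n // 2 :]

    def pick(candidates, reviewers, quota):
        # rank candidates only by the *other* half's reviews
        return sorted(
            candidates,
            key=lambda j: sum(scores[i][j] for i in reviewers),
            reverse=True,
        )[:quota]

    # fixed quotas per half, so a PI's reviews never touch its own half
    return pick(half_a, half_b, k // 2) + pick(half_b, half_a, k - k // 2)

scores = [
    [0, 5, 3, 1],
    [4, 0, 2, 5],
    [1, 3, 0, 4],
    [2, 5, 4, 0],
]
print(impartial_select(scores, k=2))
```

The cost of impartiality here is that comparisons across the two halves are never made; the paper's contribution is an impartial mechanism whose selected set nearly matches the non-impartial benchmark's quality.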