Collaborating Authors

 Xia, Lirong


Practical Algorithms for Multi-Stage Voting Rules with Parallel Universes Tiebreaking

arXiv.org Artificial Intelligence

STV and ranked pairs (RP) are two well-studied voting rules for group decision-making. Both proceed in multiple rounds, and their outcomes depend on how ties are broken in each round; yet the literature is surprisingly vague about how ties should be broken. We propose the first algorithms for computing the set of alternatives that win under some tiebreaking order, known as parallel-universes tiebreaking (PUT), for STV and RP. Unfortunately, computing PUT-winners is NP-hard under both STV and RP, and standard search algorithms from AI do not apply. We propose multiple DFS-based algorithms that use pruning strategies, heuristics, sampling, and machine learning to prioritize the search direction and significantly improve performance. We also propose novel ILP formulations for PUT-winners under STV and RP. Experiments on synthetic and real-world data show that our algorithms are overall faster than ILP.
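
To make the PUT notion concrete, here is a minimal, unoptimized sketch (not the paper's algorithm, which adds pruning, heuristics, sampling, and learned guidance) of computing PUT-winners under STV by depth-first search over all tie-broken elimination orders. The helper names and the tiny example profile are made up for illustration.

```python
# A minimal, unoptimized sketch of PUT-STV: depth-first search over every
# tie-broken elimination order. `profile` is a list of rankings, each a tuple
# of alternatives ordered from most to least preferred.

def plurality_scores(profile, remaining):
    scores = {a: 0 for a in remaining}
    for ranking in profile:
        for a in ranking:
            if a in remaining:
                scores[a] += 1
                break
    return scores

def put_stv_winners(profile, alternatives):
    winners = set()

    def dfs(remaining):
        if len(remaining) == 1:
            winners.update(remaining)
            return
        scores = plurality_scores(profile, remaining)
        lowest = min(scores.values())
        # Each way of breaking a tie for last place is its own "parallel
        # universe": branch on every possible elimination.
        for a in [x for x in remaining if scores[x] == lowest]:
            dfs(remaining - {a})

    dfs(frozenset(alternatives))
    return winners

# Three voters over {a, b, c}: every alternative wins in some universe.
profile = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]
print(put_stv_winners(profile, {"a", "b", "c"}))  # {'a', 'b', 'c'} (order may vary)
```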


A Mathematical Model For Optimal Decisions In A Representative Democracy

Neural Information Processing Systems

Direct democracy, where each voter casts one vote, fails when the average voter competence falls below 50%. This happens in noisy settings where voters have limited information. Representative democracy, where voters choose representatives who vote on their behalf, can remedy this failure. We introduce a mathematical model for studying representative democracy, in particular for understanding the parameters that maximize its decision-making capability. Our main result states that, under general and natural conditions: 1. for a fixed voting cost, the optimal number of representatives grows linearly in the number of voters; 2. for a polynomial cost, it grows only logarithmically.
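
For context, the failure mode of direct democracy mentioned above is the downward branch of the classical Condorcet Jury calculation. The sketch below uses a deliberately simplified setting (independent voters of identical competence p, not the paper's model) to show how group accuracy changes with the number of decision makers.

```python
# Probability that a simple majority of k independent voters, each correct
# with probability p, makes the right binary decision (k odd, so no ties).
from math import comb

def majority_correct(k, p):
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

for k in (1, 11, 101):
    print(k, round(majority_correct(k, 0.6), 3), round(majority_correct(k, 0.4), 3))
# With p > 1/2 the group's accuracy climbs toward 1 as k grows; with p < 1/2
# it collapses toward 0, which is the failure of direct democracy noted above.
```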


Towards Non-Parametric Learning to Rank

arXiv.org Machine Learning

This paper studies a stylized yet natural learning-to-rank problem and points out a critical flaw in a widely used nearest-neighbor algorithm. We consider a model with $n$ agents (users) $\{x_i\}_{i \in [n]}$ and $m$ alternatives (items) $\{y_j\}_{j \in [m]}$, each associated with a latent feature vector. Agents rank items nondeterministically according to the Plackett-Luce model: the higher the utility of an item to an agent, the more likely the agent is to rank it highly. Our goal is to find neighbors of an arbitrary agent or alternative in the latent space. We first show that kNN based on the Kendall-tau distance produces incorrect results in our model. We then fix the problem with a new algorithm whose features are constructed from "global information" of the data matrix, in sharp contrast to most existing feature-engineering methods. Finally, we design another new algorithm for identifying similar alternatives; its alternative features can be constructed from "local information" only, highlighting the algorithmic difference between finding similar agents and finding similar alternatives.
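
For readers unfamiliar with the ranking model assumed above, here is a minimal sketch of how a Plackett-Luce agent generates a ranking from latent utilities under one common parameterization; the item names and utility values are made up for illustration.

```python
# One common parameterization of the Plackett-Luce model: an agent builds a
# ranking position by position, choosing each next item with probability
# proportional to exp(latent utility of that item to the agent).
import math
import random

def plackett_luce_sample(utilities, rng=random):
    items = list(utilities)          # utilities: item -> latent utility
    ranking = []
    while items:
        weights = [math.exp(utilities[y]) for y in items]
        pick = rng.choices(items, weights=weights, k=1)[0]
        ranking.append(pick)
        items.remove(pick)
    return ranking

# Higher-utility items tend to appear earlier in the sampled ranking.
print(plackett_luce_sample({"y1": 2.0, "y2": 0.5, "y3": -1.0}))
```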


Composite Marginal Likelihood Methods for Random Utility Models

arXiv.org Machine Learning

We propose a novel and flexible rank-breaking-then-composite-marginal-likelihood (RBCML) framework for learning random utility models (RUMs), including the Plackett-Luce model. We characterize conditions for the objective function of RBCML to be strictly log-concave by proving that strict log-concavity is preserved under convolution and marginalization, and we characterize necessary and sufficient conditions for RBCML to satisfy consistency and asymptotic normality. Experiments on synthetic data show that RBCML for Gaussian RUMs achieves better statistical and computational efficiency than the state-of-the-art algorithm, and that RBCML for the Plackett-Luce model provides flexible tradeoffs between running time and statistical efficiency.
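
As a concrete but heavily simplified instance of the framework described above, the sketch below breaks each full ranking into all implied pairwise comparisons with uniform weights and sums the log pairwise marginals of the Plackett-Luce model; the paper's RBCML allows general breakings, weights, and RUMs, and the names and example data here are illustrative only.

```python
# Pairwise rank-breaking with uniform weights for the Plackett-Luce model:
# every ranking contributes log P(i above j) = u_i - log(exp(u_i) + exp(u_j))
# for each pair (i, j) it orders.
import math
from itertools import combinations

def composite_marginal_loglik(rankings, utilities):
    total = 0.0
    for ranking in rankings:
        for i, j in combinations(ranking, 2):   # i is ranked above j
            ui, uj = utilities[i], utilities[j]
            total += ui - math.log(math.exp(ui) + math.exp(uj))
    return total

rankings = [("a", "b", "c"), ("b", "a", "c")]
print(composite_marginal_loglik(rankings, {"a": 1.0, "b": 0.8, "c": 0.0}))
```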


Practical Algorithms for STV and Ranked Pairs with Parallel Universes Tiebreaking

arXiv.org Artificial Intelligence

STV and ranked pairs (RP) are two well-studied voting rules for group decision-making. Both proceed in multiple rounds, and their outcomes depend on how ties are broken in each round; yet the literature is surprisingly vague about how ties should be broken. We propose the first algorithms for computing the set of alternatives that win under some tiebreaking order, known as parallel-universes tiebreaking (PUT), for STV and RP. Unfortunately, computing PUT-winners is NP-hard under both STV and RP, and standard search algorithms from AI do not apply. We propose multiple DFS-based algorithms with pruning strategies and heuristics, using machine learning to prioritize the search direction, which significantly improves performance. We also propose novel ILP formulations for PUT-winners under STV and RP. Experiments on synthetic and real-world data show that our algorithms are overall significantly faster than ILP, although ILP is significantly faster in a few RP cases.
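
Complementing the STV sketch given earlier, here is a minimal ranked-pairs routine with a single fixed tiebreak order; PUT-RP, as studied in the paper, instead branches over every ordering of pairs whose margins are tied. The function names and the three-voter example are illustrative only.

```python
# Ranked pairs with one fixed tiebreak: lock pairwise victories from largest
# margin down, skipping any edge that would create a cycle; the winner is the
# alternative with no locked edge against it.

def pairwise_margins(profile, alternatives):
    margin = {(a, b): 0 for a in alternatives for b in alternatives if a != b}
    for ranking in profile:
        for i, a in enumerate(ranking):
            for b in ranking[i + 1:]:
                margin[(a, b)] += 1
                margin[(b, a)] -= 1
    return margin

def creates_cycle(locked, a, b):
    # Adding a -> b closes a cycle exactly when a is reachable from b.
    stack, seen = [b], set()
    while stack:
        x = stack.pop()
        if x == a:
            return True
        if x not in seen:
            seen.add(x)
            stack.extend(y for (p, y) in locked if p == x)
    return False

def ranked_pairs_winner(profile, alternatives):
    margin = pairwise_margins(profile, alternatives)
    # Fixed tiebreak: by margin, then lexicographically by the pair itself;
    # PUT-RP would branch on every ordering of equal-margin pairs.
    pairs = sorted((p for p in margin if margin[p] > 0),
                   key=lambda p: (-margin[p], p))
    locked = set()
    for a, b in pairs:
        if not creates_cycle(locked, a, b):
            locked.add((a, b))
    # The winner has no locked edge pointing into it.
    return next(a for a in sorted(alternatives)
                if not any(y == a for (_, y) in locked))

profile = [("a", "b", "c"), ("b", "c", "a"), ("a", "c", "b")]
print(ranked_pairs_winner(profile, {"a", "b", "c"}))  # a
```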


A Cost-Effective Framework for Preference Elicitation and Aggregation

arXiv.org Artificial Intelligence

We propose a cost-effective framework for preference elicitation and aggregation under the Plackett-Luce model with features. Given a budget, our framework iteratively computes the most cost-effective elicitation questions to help agents make better group decisions. We illustrate the viability of the framework with an experiment on Amazon Mechanical Turk that estimates the cost of answering different types of elicitation questions, and we compare the prediction accuracy of the framework under various information criteria that evaluate the expected information gain of a question. Our experiments show that carefully designed information criteria are much more efficient than asking questions at random under a budget constraint.
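
As a toy illustration of "expected information gain per unit cost" (not the paper's exact criteria or model), the sketch below scores hypothetical yes/no elicitation questions by the expected entropy reduction of a discrete posterior over parameters, divided by an assumed question cost, and picks the best one; all names, probabilities, and costs are made up.

```python
# Toy cost-effective elicitation: pick the question whose expected entropy
# reduction per unit cost is largest.
import math

def entropy(dist):
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def expected_info_gain(posterior, answer_prob):
    # posterior: parameter -> probability
    # answer_prob: (parameter, answer) -> P(answer | parameter)
    answers = {ans for (_, ans) in answer_prob}
    gain = entropy(posterior)
    for ans in answers:
        p_ans = sum(posterior[t] * answer_prob[(t, ans)] for t in posterior)
        if p_ans > 0:
            updated = {t: posterior[t] * answer_prob[(t, ans)] / p_ans
                       for t in posterior}
            gain -= p_ans * entropy(updated)
    return gain

posterior = {"theta1": 0.5, "theta2": 0.5}
cheap_q = {("theta1", "yes"): 0.6, ("theta1", "no"): 0.4,
           ("theta2", "yes"): 0.4, ("theta2", "no"): 0.6}
sharp_q = {("theta1", "yes"): 0.9, ("theta1", "no"): 0.1,
           ("theta2", "yes"): 0.1, ("theta2", "no"): 0.9}
questions = {"cheap": (cheap_q, 1.0), "informative": (sharp_q, 5.0)}
best = max(questions,
           key=lambda q: expected_info_gain(posterior, questions[q][0]) / questions[q][1])
print(best)  # here the costlier question still wins per unit cost
```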


Vote Until Two of You Agree: Mechanisms with Small Distortion and Sample Complexity

AAAI Conferences

To design social choice mechanisms with desirable utility properties, normative properties, and low sample complexity, we propose a new randomized mechanism called 2-Agree. This mechanism asks random voters for their top alternatives until at least two voters agree, at which point it selects that alternative as the winner. We prove that, despite its simplicity and low sample complexity, 2-Agree achieves almost optimal distortion on a metric space when the number of alternatives is not large, and that it satisfies anonymity, neutrality, ex-post Pareto efficiency, very strong SD-participation, and approximate truthfulness. We further show that 2-Agree works well for a larger number of alternatives when agents are decisive.
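
The mechanism itself is short enough to sketch directly. The version below samples voters without replacement and, as an assumption not spelled out in the abstract, falls back to the last sampled voter's choice if no two voters ever agree.

```python
# 2-Agree as described above: query voters in random order for their top
# alternative and stop as soon as some alternative has been named twice.
import random

def two_agree(top_choices, rng=random):
    counts = {}
    for top in rng.sample(top_choices, len(top_choices)):  # voters, random order
        counts[top] = counts.get(top, 0) + 1
        if counts[top] == 2:      # the first time two sampled voters agree
            return top
    return top                    # assumed fallback: last sampled voter's pick

print(two_agree(["a", "a", "b", "c", "a", "b"]))
```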


Mechanism Design for Multi-Type Housing Markets

AAAI Conferences

We study multi-type housing markets, where there are p ≥ 2 types of items, each agent is initially endowed with one item of each type, and the goal is to design mechanisms without monetary transfer that (re)allocate items to agents based on their preferences over bundles of items, such that each agent again receives one item of each type. In sharp contrast to classical housing markets, previous studies of multi-type housing markets have been hindered by the lack of natural solution concepts, because the strict core might be empty. We break this barrier by leveraging AI techniques and making natural assumptions on agents’ preferences. We show that when agents’ preferences are lexicographic, even with different importance orders, the classical top-trading-cycles mechanism can be extended while preserving most of its desirable properties. We also investigate the computational complexity of checking whether an allocation is in the strict core and whether the strict core is empty. Our results convey an encouragingly positive message: it is possible to design good mechanisms for multi-type housing markets under natural assumptions on preferences.
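
For reference, here is a minimal sketch of the classical single-type top-trading-cycles (TTC) mechanism that the paper extends to multiple item types under lexicographic preferences; the agent and item names are made up for illustration.

```python
# Classic single-type TTC: each agent owns one item and reports a strict
# preference order over all items; repeatedly, every remaining agent points to
# the owner of their favorite remaining item, agents on a cycle trade along
# it, and the cycle leaves the market.

def ttc(endowment, preferences):
    # endowment: agent -> item they initially own
    # preferences: agent -> list of items, most preferred first
    owner = {item: agent for agent, item in endowment.items()}
    allocation = {}
    remaining = set(endowment)
    while remaining:
        points_to = {
            a: owner[next(i for i in preferences[a] if owner[i] in remaining)]
            for a in remaining
        }
        # Follow pointers until an agent repeats: that suffix is a cycle.
        path, a = [], next(iter(remaining))
        while a not in path:
            path.append(a)
            a = points_to[a]
        cycle = path[path.index(a):]
        # Each agent in the cycle receives the item of the agent they point to.
        for b in cycle:
            allocation[b] = endowment[points_to[b]]
        remaining -= set(cycle)
    return allocation

endowment = {"1": "h1", "2": "h2", "3": "h3"}
preferences = {"1": ["h2", "h1", "h3"],
               "2": ["h1", "h3", "h2"],
               "3": ["h1", "h2", "h3"]}
print(ttc(endowment, preferences))  # agents 1 and 2 swap; agent 3 keeps h3
```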


Quantitative Extensions of the Condorcet Jury Theorem with Strategic Agents

AAAI Conferences

The Condorcet Jury Theorem justifies the wisdom of crowds and lays the foundation of the ideology of the democratic regime. However, the Jury Theorem and most of its extensions focus on two alternatives, and none of them quantitatively evaluates the effect of agents’ strategic behavior on a mechanism’s truth-revealing power. We initiate a research agenda of quantitatively extending the Jury Theorem with strategic agents by characterizing the price of anarchy (PoA) and the price of stability (PoS) of common-interest Bayesian voting games for three classes of mechanisms: plurality, MAPs, and the mechanisms that satisfy anonymity, neutrality, and strategy-proofness (w.r.t. a set of natural probability models). We show that while plurality and MAPs have better best-case truth-revealing power (lower PoS), the third class of mechanisms is more robust against agents’ strategic behavior (lower PoA).
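
For orientation, the standard cost-style definitions of the two measures, written here with performance taken to be the probability of revealing the ground truth (the paper's formal definitions for common-interest Bayesian voting games may differ in details), are

$$
\mathrm{PoA} \;=\; \frac{\text{truth-revealing probability of the optimal strategy profile}}{\text{truth-revealing probability of the worst equilibrium}},
\qquad
\mathrm{PoS} \;=\; \frac{\text{truth-revealing probability of the optimal strategy profile}}{\text{truth-revealing probability of the best equilibrium}},
$$

so both ratios are at least 1, and lower values mean the mechanism loses less truth-revealing power to strategic behavior, matching the abstract's reading of "lower PoA" and "lower PoS" as better.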