Indecision Modeling
McElfresh, Duncan C., Chan, Lok, Doyle, Kenzie, Sinnott-Armstrong, Walter, Conitzer, Vincent, Borg, Jana Schaich, Dickerson, John P.
AI systems are often used to make or contribute to important decisions in a growing range of applications, including criminal justice, hiring, and medicine. Since these decisions impact human lives, it is important that the AI systems act in ways which align with human values. Techniques for preference modeling and social choice help researchers learn and aggregate people's preferences, which are used to guide AI behavior; thus, it is imperative that these learned preferences are accurate. These techniques often assume that people are willing to express strict preferences over alternatives, which is not true in practice. People are often indecisive, especially when their decision has moral implications. The philosophy and psychology literature shows that indecision is a measurable and nuanced behavior -- and that there are several different reasons people are indecisive. This complicates the task of both learning and aggregating preferences, since most of the relevant literature makes restrictive assumptions on the meaning of indecision. We begin to close this gap by formalizing several mathematical indecision models based on theories from philosophy, psychology, and economics; these models can be used to describe (indecisive) agent decisions, both when agents are allowed to express indecision and when they are not. We test these models using data collected from an online survey where participants choose how to (hypothetically) allocate organs to patients waiting for a transplant.
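To make this concrete, here is a minimal sketch of one simple indecision model consistent with the abstract's description: a semiorder-style threshold model, in which an agent reports a strict preference only when the utility gap is large enough, paired with a coin-flip assumption for settings where indecision may not be expressed. The utilities and threshold are hypothetical; this is an illustration, not necessarily one of the paper's exact models.

```python
import random

def threshold_choice(u_a, u_b, tau):
    """Semiorder-style indecision model (illustrative, not necessarily
    one of the paper's exact models): report a strict preference only
    when the utility gap exceeds the threshold tau."""
    gap = u_a - u_b
    if gap > tau:
        return "a"
    if gap < -tau:
        return "b"
    return "indecisive"

def forced_choice(u_a, u_b, tau):
    """When indecision may not be expressed, one common assumption is
    that an indecisive agent picks uniformly at random."""
    verdict = threshold_choice(u_a, u_b, tau)
    return verdict if verdict != "indecisive" else random.choice(["a", "b"])

print(threshold_choice(0.9, 0.5, 0.1))   # 'a': gap 0.4 exceeds tau
print(threshold_choice(0.9, 0.85, 0.1))  # 'indecisive': gap within tau
```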
Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments
Jecmen, Steven, Zhang, Hanrui, Liu, Ryan, Shah, Nihar B., Conitzer, Vincent, Fang, Fei
We consider three important challenges in conference peer review: (i) reviewers maliciously attempting to get assigned to certain papers to provide positive reviews, possibly as part of quid-pro-quo arrangements with the authors; (ii) "torpedo reviewing," where reviewers deliberately attempt to get assigned to certain papers that they dislike in order to reject them; (iii) reviewer de-anonymization on release of the similarities and the reviewer-assignment code. On the conceptual front, we identify connections between these three problems and present a framework that brings all these challenges under a common umbrella. We then present a (randomized) algorithm for reviewer assignment that can optimally solve the reviewer-assignment problem under any given constraints on the probability of assignment for any reviewer-paper pair. We further consider the problem of restricting the joint probability that certain suspect pairs of reviewers are assigned to certain papers, and show that this problem is NP-hard for arbitrary constraints on these joint probabilities but efficiently solvable for a practical special case. Finally, we experimentally evaluate our algorithms on datasets from past conferences, where we observe that they can limit the chance that any malicious reviewer gets assigned to their desired paper to 50% while producing assignments with over 90% of the total optimal similarity. Our algorithms still achieve this similarity while also preventing reviewers with close associations from being assigned to the same paper.
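As an illustration of the assignment stage, the following sketch solves the fractional relaxation described above with an off-the-shelf LP solver: maximize total similarity subject to paper loads, reviewer loads, and a cap q on every reviewer-paper assignment probability. The toy data and function names are assumptions; the paper's full randomized algorithm additionally samples a deterministic assignment from the fractional solution, which is omitted here.

```python
import numpy as np
from scipy.optimize import linprog

def fractional_assignment(S, k, ell, q):
    """LP relaxation (illustrative sketch): maximize total similarity
    subject to each paper getting exactly k reviewers, each reviewer
    reviewing at most ell papers, and every reviewer-paper assignment
    probability capped at q. S is an (n_reviewers x n_papers) matrix."""
    n_rev, n_pap = S.shape
    c = -S.flatten()  # linprog minimizes, so negate similarities

    # Equality: each paper's assignment probabilities sum to k.
    A_eq = np.zeros((n_pap, n_rev * n_pap))
    for p in range(n_pap):
        A_eq[p, p::n_pap] = 1.0
    b_eq = np.full(n_pap, float(k))

    # Inequality: each reviewer's expected load is at most ell.
    A_ub = np.zeros((n_rev, n_rev * n_pap))
    for r in range(n_rev):
        A_ub[r, r * n_pap:(r + 1) * n_pap] = 1.0
    b_ub = np.full(n_rev, float(ell))

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0.0, q), method="highs")
    return res.x.reshape(n_rev, n_pap)

# Toy instance: 4 reviewers, 2 papers, 1 reviewer per paper, load
# limit 1, and no pair assigned with probability above 0.5.
S = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
P = fractional_assignment(S, k=1, ell=1, q=0.5)
print(np.round(P, 2))
```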
Adapting a Kidney Exchange Algorithm to Align with Human Values
Freedman, Rachel, Borg, Jana Schaich, Sinnott-Armstrong, Walter, Dickerson, John P., Conitzer, Vincent
As AI is deployed increasingly broadly, AI researchers are confronted with the moral implications of their work. The pursuit of simple objectives, such as minimizing error rates, maximizing resource efficiency, or decreasing response times, often results in systems that have unintended consequences when they confront the real world, such as discriminating against certain groups of people [34]. It would be helpful for AI researchers and practitioners to have a general set of principles with which to approach these problems [45, 41, 24, 16, 33]. One may ask why any moral decisions should be left to computers at all. There are multiple possible reasons. One is that the decision needs to be made so quickly that calling in a human for the decision is not feasible, as would be the case for a self-driving car having to make a split-second decision about whom to hit [13]. Another reason could be that each individual decision by itself is too insignificant to bother a human, even though all the decisions combined may be highly significant morally--for example, if we were to consider the moral impact of each advertisement shown online. A third reason is that the moral decision is hard to decouple from a computational problem that apparently exceeds human capabilities. This is the case in many machine learning applications (e.g., should this person be released on bail?).
Can Artificial Intelligence Do Everything That We Can?
Conitzer, Vincent
A shorter version of this article appeared in The Wall Street Journal under the title Natural Intelligence Still Has Its Advantages, on August 28, 2018 [2]. AI systems are starting to outperform people in domains ranging from board games to speech recognition. Is humanity on the way out? For those not working in AI, it can be difficult to interpret highly visible achievements in the field.
Disarmament Games With Resource
Deng, Yuan (Duke University) | Conitzer, Vincent (Duke University)
A paper by Deng and Conitzer in AAAI'17 introduces disarmament games, in which players alternatingly commit not to play certain pure strategies. However, in practice, disarmament usually does not consist of removing a strategy, but rather of removing a resource (and doing so simultaneously rules out all the strategies in which that resource is used). In this paper, we introduce a model of disarmament games in which resources, rather than strategies, are removed. We prove NP-completeness of several formulations of the problem of achieving desirable outcomes via disarmament. We then study the case where resources can be fractionally removed, and prove a result analogous to the folk theorem that all desirable outcomes can be achieved. We show that we can approximately achieve any desirable outcome in a polynomial number of rounds, though determining whether a given outcome can be obtained in a given number of rounds remains NP-complete.
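The resource-removal model can be illustrated in a few lines of code (the toy game and names below are purely illustrative): identifying each pure strategy with the set of resources it uses means that removing a single resource rules out every strategy that uses it.

```python
# Illustrative sketch of the resource-removal model: each pure
# strategy is identified with the set of resources it uses, so
# removing one resource rules out every strategy that uses it.

strategies = {
    "patrol_north": {"drone", "jeep"},
    "patrol_south": {"jeep"},
    "air_strike":   {"drone", "missile"},
    "stand_down":   set(),
}

def remove_resource(strategies, resource):
    """Return the strategies that survive removing `resource`."""
    return {name: res for name, res in strategies.items()
            if resource not in res}

# Removing the drone rules out both strategies that use it.
remaining = remove_resource(strategies, "drone")
print(sorted(remaining))  # ['patrol_south', 'stand_down']
```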
Adapting a Kidney Exchange Algorithm to Align With Human Values
Freedman, Rachel (Duke University) | Borg, Jana Schaich (Duke University) | Sinnott-Armstrong, Walter (Duke University) | Dickerson, John P. (University of Maryland) | Conitzer, Vincent (Duke University)
The efficient allocation of limited resources is a classical problem in economics and computer science. In kidney exchanges, a central market maker allocates living kidney donors to patients in need of an organ. Patients and donors in kidney exchanges are prioritized using ad-hoc weights decided on by committee and then fed into an allocation algorithm that determines who gets what -- and who does not. In this paper, we provide an end-to-end methodology for estimating weights of individual participant profiles in a kidney exchange. We first elicit from human subjects a list of patient attributes they consider acceptable for the purpose of prioritizing patients (e.g., medical characteristics, lifestyle choices, and so on). Then, we ask subjects comparison queries between patient profiles and estimate weights in a principled way from their responses. We show how to use these weights in kidney exchange market clearing algorithms. We then evaluate the impact of the weights in simulations and find that the precise numerical values of the weights we computed matter little, other than the ordering of profiles that they imply. However, compared to not prioritizing patients at all, there is a significant effect, with certain classes of patients being (de)prioritized based on the human-elicited value judgments.
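As a sketch of the weight-estimation step, the following code fits a Bradley-Terry-style model to simulated comparison responses: if a subject's choice between two profiles depends on the weighted difference of their attribute vectors, then logistic regression on profile differences recovers the weights. The attributes, data, and model here are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative Bradley-Terry-style elicitation sketch (attribute
# names, data, and model are assumptions, not the paper's exact
# procedure). A subject picks the left profile with probability
# sigmoid((left - right) @ w), so logistic regression on profile
# differences recovers the attribute weights w.

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])        # hidden "moral" weights
n_queries = 500
A = rng.integers(0, 2, (n_queries, 3))     # profile shown on the left
B = rng.integers(0, 2, (n_queries, 3))     # profile shown on the right
diff = A - B
p_left = 1 / (1 + np.exp(-diff @ true_w))  # prob. subject picks left
y = (rng.random(n_queries) < p_left).astype(int)

model = LogisticRegression(fit_intercept=False).fit(diff, y)
print(np.round(model.coef_[0], 2))  # estimated weights, close to true_w
```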
Complexity of Scheduling Charging in the Smart Grid
de Weerdt, Mathijs, Albert, Michael, Conitzer, Vincent
In the smart grid, the intent is to use flexibility in demand, both to balance demand and supply and to resolve potential congestion. A first prominent example of such flexible demand is the charging of electric vehicles, which do not necessarily need to be charged as soon as they are plugged in. The problem of optimally scheduling the charging demand of electric vehicles within the constraints of the electricity infrastructure is called the charge scheduling problem. The models of the charging speed, horizon, and charging demand determine the computational complexity of the charge scheduling problem. For about 20 variants, we show, using a dynamic programming approach, that the problem is either in P or weakly NP-hard. We also show that about 10 variants of the problem are strongly NP-hard, presenting a potentially significant obstacle to their use in practical situations of scale.
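To illustrate the kind of dynamic program involved, here is a minimal sketch for one simple variant: a single vehicle, integer charge amounts, time-varying prices, and a per-slot rate limit. These modeling choices are assumptions for illustration, not the paper's exact variants.

```python
# Illustrative DP for a simple charge-scheduling variant (one vehicle,
# integer charge levels, time-varying prices, per-slot rate limit);
# this is not the paper's exact model, just a sketch of the approach.

def cheapest_charge(prices, demand, max_rate):
    """dp[t][d] = cheapest cost of charging d more units using
    slots t..T-1, charging at most max_rate units per slot."""
    T = len(prices)
    INF = float("inf")
    dp = [[INF] * (demand + 1) for _ in range(T + 1)]
    dp[T][0] = 0.0  # no slots left: feasible only if demand is met
    for t in range(T - 1, -1, -1):
        for d in range(demand + 1):
            # charge x units in slot t at price prices[t] per unit
            for x in range(0, min(max_rate, d) + 1):
                cand = x * prices[t] + dp[t + 1][d - x]
                dp[t][d] = min(dp[t][d], cand)
    return dp[0][demand]

# 6 slots, need 4 units, at most 2 units per slot.
print(cheapest_charge([5, 1, 4, 1, 3, 2], demand=4, max_rate=2))
# 4.0: charge 2 units in each of the two cheapest slots (price 1)
```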
Game-Theoretic Question Selection for Tests
Li, Yuqian, Conitzer, Vincent
Conventionally, the questions on a test are assumed to be kept secret from test takers until the test. However, for tests that are taken on a large scale, particularly asynchronously, this is very hard to achieve. For example, TOEFL iBT and driver's license test questions are easily found online. This also appears likely to become an issue for Massive Open Online Courses (MOOCs, as offered for example by Coursera, Udacity, and edX). Specifically, the test result may not reflect the true ability of a test taker if questions are leaked beforehand. In this paper, we take the loss of confidentiality as a fact. Even so, not all hope is lost: the test taker can memorize only a limited set of questions' answers, and the tester can randomize which questions appear on the test. We model this as a Stackelberg game, in which the tester (as leader) commits to a mixed strategy and the test taker (as follower) responds. Informally, the goal of the tester is to best reveal the true ability of a test taker, while the test taker tries to maximize the test result (pass probability or score). We provide an exponential-size linear program formulation that computes the optimal test strategy, prove several NP-hardness results on computing optimal test strategies in general, and give efficient algorithms for special cases (scored tests and single-question tests). We also provide experiments for the proposed algorithms, showing their scalability and the increase in the tester's utility relative to the uniform-at-random strategy. The increase is quite significant when questions have some correlation -- for example, when a test taker who can solve a harder question can always solve easier questions.
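For intuition, the following sketch computes an optimal mixed strategy to commit to in a small two-player game, using the standard multiple-LPs method for Stackelberg games (Conitzer and Sandholm, 2006): one LP per candidate follower best response. The toy payoffs, in which the tester mixes over two questions and the test taker can memorize only one answer, are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_commitment(L, F):
    """Multiple-LPs method for an optimal mixed strategy to commit to:
    for each follower action j, find the leader mixture maximizing
    leader utility subject to j being a follower best response, then
    keep the best feasible option. L, F: leader/follower payoffs."""
    m, n = L.shape
    best_val, best_x, best_j = -np.inf, None, None
    for j in range(n):
        # maximize x @ L[:, j]  s.t.  x @ F[:, j] >= x @ F[:, jp]
        A_ub = np.array([F[:, jp] - F[:, j] for jp in range(n) if jp != j])
        b_ub = np.zeros(n - 1)
        A_eq, b_eq = np.ones((1, m)), np.array([1.0])
        res = linprog(-L[:, j], A_ub=A_ub, b_ub=b_ub,
                      A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
        if res.success and -res.fun > best_val:
            best_val, best_x, best_j = -res.fun, res.x, j
    return best_x, best_j, best_val

# Toy test: the tester mixes over two questions; the test taker can
# memorize the answer to only one of them (payoffs are illustrative).
L = np.array([[0.0, 1.0], [1.0, 0.0]])   # tester utility
F = np.array([[1.0, 0.0], [0.0, 1.0]])   # test taker utility
print(optimal_commitment(L, F))          # tester mixes 50/50
```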
Moral Decision Making Frameworks for Artificial Intelligence
Conitzer, Vincent (Duke University) | Sinnott-Armstrong, Walter (Duke University) | Borg, Jana Schaich (Duke University) | Deng, Yuan (Duke University) | Kramer, Max (Duke University)
The generality of decision and game theory has enabled domain-independent progress in AI research. For example, a better algorithm for finding good policies in (PO)MDPs can be instantly used in a variety of applications. But such a general theory is lacking when it comes to moral decision making. For AI applications with a moral component, are we then forced to build systems based on many ad-hoc rules? In this paper we discuss possible ways to avoid this conclusion.
Disarmament Games
Deng, Yuan (Duke University) | Conitzer, Vincent (Duke University)
Much recent work in the AI community concerns algorithms for computing optimal mixed strategies to commit to, as well as the deployment of such algorithms in real security applications. Another possibility is to commit not to play certain actions. If only one player makes such a commitment, then this is generally less powerful than completely committing to a single mixed strategy. However, if players can alternatingly commit not to play certain actions and thereby iteratively reduce their strategy spaces, then desirable outcomes can be obtained that would not have been possible with just a single player committing to a mixed strategy. We refer to such a setting as a disarmament game. In this paper, we study disarmament for two-player normal-form games. We show that deciding whether an outcome can be obtained with disarmament is NP-complete (even for a fixed number of rounds), if only pure strategies can be removed. On the other hand, for the case where mixed strategies can be removed, we provide a folk theorem that shows that all desirable utility profiles can be obtained, and give an efficient algorithm for (approximately) obtaining them.
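A toy example makes the idea concrete (the payoffs are illustrative, and the paper's formal model imposes incentive constraints on each commitment that this sketch omits): in a prisoner's-dilemma-like game, if both players commit not to play their dominant "defect" strategy, the unique remaining outcome improves on the original equilibrium for both.

```python
import numpy as np
from itertools import product

# Illustrative disarmament example; the paper's formal model imposes
# incentive constraints on each commitment that this toy omits.

def pure_nash(A, B, rows, cols):
    """Pure Nash equilibria of the subgame on the given strategy sets."""
    eqs = []
    for i, j in product(rows, cols):
        if (A[i, j] >= max(A[k, j] for k in rows) and
                B[i, j] >= max(B[i, l] for l in cols)):
            eqs.append((i, j))
    return eqs

# Row/column 0 = cooperate, 1 = defect.
A = np.array([[3, 0], [4, 1]])  # row player's payoffs
B = np.array([[3, 4], [0, 1]])  # column player's payoffs

print(pure_nash(A, B, [0, 1], [0, 1]))  # [(1, 1)]: mutual defection
# After both players commit not to play strategy 1:
print(pure_nash(A, B, [0], [0]))        # [(0, 0)]: mutual cooperation
```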