Radanovic, Goran
Information Gathering With Peers: Submodular Optimization With Peer-Prediction Constraints
Radanovic, Goran (Harvard University) | Singla, Adish (MPI-SWS) | Krause, Andreas (ETH Zurich) | Faltings, Boi (EPFL)
We study a problem of optimal information gathering from multiple data providers who must be incentivized to provide accurate information. This problem arises in many real-world applications that rely on crowdsourced data sets but where the process of obtaining data is costly; a notable example is crowd sensing. To this end, we formulate optimal information gathering as maximization of a submodular function under a budget constraint, where the budget represents the total expected payment to data providers. In contrast to existing approaches, we base our payments on incentives for accuracy and truthfulness, in particular, peer prediction methods that score each selected data provider against its best peer, while ensuring that the minimum expected payment is above a given threshold. We first show that the problem at hand is hard to approximate within a constant factor that does not depend on the properties of the payment function. However, for given topological and analytical properties of the instance, we construct two greedy algorithms, called PPCGreedy and PPCGreedyIter, and establish theoretical bounds on their performance w.r.t. the optimal solution. Finally, we evaluate our methods using a realistic crowd sensing testbed.
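For background on the algorithmic template involved, here is a minimal sketch of cost-benefit greedy selection under a budget, the pattern that PPCGreedy-style methods build on. The coverage sets, costs, and names below are illustrative placeholders, not the paper's payment model, in which costs come from peer-prediction payments with a minimum-payment constraint.

```python
# Illustrative sketch only: greedy submodular maximization under a
# budget, picking the provider with the best marginal gain per cost.

def greedy_budgeted(coverage, cost, budget):
    """Repeatedly add the provider with the best marginal gain per unit cost."""
    selected, covered, spent = [], set(), 0.0
    remaining = set(coverage)
    while remaining:
        def gain_per_cost(p):
            return len(coverage[p] - covered) / cost[p]  # marginal gain / cost
        best = max(remaining, key=gain_per_cost)
        if gain_per_cost(best) == 0 or spent + cost[best] > budget:
            break
        selected.append(best)
        covered |= coverage[best]
        spent += cost[best]
        remaining.discard(best)
    return selected, covered, spent

# Toy instance: three sensors, each observing a set of city regions.
coverage = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {5}}
cost = {"s1": 2.0, "s2": 1.0, "s3": 1.0}
print(greedy_budgeted(coverage, cost, budget=3.0))
```

In the classical analysis, this ratio rule is combined with the best single affordable element to obtain a constant approximation factor; the paper's hardness result shows why no constant factor independent of the payment function is possible once peer-prediction constraints enter.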
Partial Truthfulness in Minimal Peer Prediction Mechanisms With Limited Knowledge
Radanovic, Goran (Harvard University) | Faltings, Boi (EPFL)
We study minimal single-task peer prediction mechanisms that have limited knowledge about agents' beliefs. Without knowing what agents' beliefs are or eliciting additional information, it is not possible to design a truthful mechanism in the Bayesian-Nash sense. We go beyond truthfulness and explore equilibrium strategy profiles that are only partially truthful. Using results from the multi-armed bandit literature, we characterize how inefficient these equilibria are compared to truthful reporting. We measure the inefficiency of such strategies by counting the number of dishonest reports that any minimal knowledge-bounded mechanism must have. We show that the order of this number is Θ(log n), where n is the number of agents, and we provide a peer prediction mechanism that achieves this bound in expectation.
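To see where a Θ(log n) count can come from, here is an illustrative calculation from the bandit literature (not the paper's mechanism): a schedule that "explores" round t with probability 1/t incurs an expected number of exploration rounds equal to the harmonic number H_n ≈ ln n, the same growth rate as the dishonest-report bound above.

```python
# Illustration only: an epsilon-decreasing schedule with per-round
# exploration probability 1/t has E[#explorations] = H_n ~ ln n.

import math
import random

def expected_explorations(n):
    """Exact expectation: the sum of per-round exploration probabilities."""
    return sum(1.0 / t for t in range(1, n + 1))

def simulated_explorations(n, seed=0):
    """One sampled run of the same schedule."""
    rng = random.Random(seed)
    return sum(rng.random() < 1.0 / t for t in range(1, n + 1))

for n in (10, 1_000, 100_000):
    print(n, round(expected_explorations(n), 2), round(math.log(n), 2),
          simulated_explorations(n))
```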
Multi-View Decision Processes: The Helper-AI Problem
Dimitrakakis, Christos | Parkes, David C. | Radanovic, Goran | Tylkin, Paul
We consider a two-player sequential game in which agents have the same reward function but may disagree on the transition probabilities of an underlying Markovian model of the world. By committing to play a specific policy, the agent with the correct model can steer the behavior of the other agent, and seek to improve utility. We model this setting as a multi-view decision process, which we use to formally analyze the positive effect of steering policies. Furthermore, we develop an algorithm for computing the agents' achievable joint policy, and we experimentally show that it can lead to a large utility increase when the agents' models diverge.
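A toy rendering of the commitment idea follows, with all states, rewards, and probabilities invented for illustration (this is not the paper's algorithm): the leader enumerates deterministic commitments, the follower best-responds under its own, incorrect, transition model, and each pair is then evaluated under the true model.

```python
# Toy sketch of policy commitment in a shared-reward game where the
# players disagree on transitions ("multi-view"). Numbers are made up.

S, A, GAMMA = (0, 1), (0, 1), 0.9

# Shared reward: being in state 1 is good; action a2 = 1 pays a small bonus.
R = {(s, a1, a2): s + 0.2 * a2 for s in S for a1 in A for a2 in A}

# Probability of landing in state 1. Truth: only a2 = 0 reaches it.
P_TRUE = {(s, a1, a2): 0.9 if a2 == 0 else 0.1
          for s in S for a1 in A for a2 in A}
# The follower wrongly believes that matching the leader is what matters.
P_VIEW = {(s, a1, a2): 0.9 if a1 == a2 else 0.1
          for s in S for a1 in A for a2 in A}

def q_value(s, a2, pi1, P, V):
    a1 = pi1[s]
    p1 = P[(s, a1, a2)]
    return R[(s, a1, a2)] + GAMMA * ((1 - p1) * V[0] + p1 * V[1])

def best_response(pi1, P, iters=300):
    """Follower's greedy stationary policy under its own model, given pi1."""
    V = {s: 0.0 for s in S}
    for _ in range(iters):
        V = {s: max(q_value(s, a2, pi1, P, V) for a2 in A) for s in S}
    return {s: max(A, key=lambda a2: q_value(s, a2, pi1, P, V)) for s in S}

def true_return(pi1, pi2, iters=300):
    """Discounted value of the joint policy from state 0, under the truth."""
    V = {s: 0.0 for s in S}
    for _ in range(iters):
        V = {s: q_value(s, pi2[s], pi1, P_TRUE, V) for s in S}
    return V[0]

# The leader keeps the commitment whose induced best response does
# best under the true model -- steering the follower despite its
# wrong beliefs about the dynamics.
commitments = [{0: a, 1: b} for a in A for b in A]
best = max(commitments,
           key=lambda pi1: true_return(pi1, best_response(pi1, P_VIEW)))
print("best commitment:", best,
      "->", round(true_return(best, best_response(best, P_VIEW)), 3))
```

Here committing to a1 = 0 leads the follower, who believes coordination drives the dynamics, to also play the truly good action, illustrating the positive effect of steering policies.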
Game Theory for Data Science: Eliciting Truthful Information
Faltings, Boi | Radanovic, Goran
Intelligent systems often depend on data provided by information agents, for example, sensor data or crowdsourced human computation. Providing accurate and relevant data requires costly effort that agents may not always be willing to provide. Thus, it becomes important to verify the correctness of data and provide incentives so that agents providing high-quality data are rewarded, while those that don't are discouraged by low rewards. ISBN 9781627057295, 151 pages.
Subjective Fairness: Fairness Is in the Eye of the Beholder
Dimitrakakis, Christos | Liu, Yang | Parkes, David C. | Radanovic, Goran
We analyze different notions of fairness in decision making when the underlying model is not known with certainty. We argue that recent notions of fairness in machine learning need to be modified to incorporate uncertainty about model parameters. We introduce the notion of subjective fairness as a suitable candidate for fair Bayesian decision-making rules, relate this definition to existing ones, and experimentally demonstrate the inherent accuracy-fairness tradeoff under this definition.
Incentives for Subjective Evaluations with Private Beliefs
Radanovic, Goran (Ecole Polytechnique Fédérale de Lausanne (EPFL)) | Faltings, Boi (Ecole Polytechnique Fédérale de Lausanne (EPFL))
The modern web critically depends on aggregation of information from self-interested agents, for example, opinion polls, product ratings, or crowdsourcing. We consider a setting where multiple objects (questions, products, tasks) are evaluated by a group of agents. We first construct a minimal peer prediction mechanism that elicits honest evaluations from a homogeneous population of agents with different private beliefs. We then show that it is impossible to strictly elicit honest evaluations from a heterogeneous group of agents with different private beliefs. Nevertheless, we provide a modified version of a divergence-based Bayesian Truth Serum that incentivizes agents to report consistently, making truthful reporting a weak equilibrium of the mechanism.
Incentives for Truthful Information Elicitation of Continuous Signals
Radanovic, Goran (Ecole Polytechnique Federale de Lausanne (EPFL)) | Faltings, Boi (Ecole Polytechnique Federale de Lausanne (EPFL))
We consider settings where a collective intelligence is formed by aggregating information contributed from many independent agents, such as product reviews, community sensing, or opinion polls. We propose a novel mechanism that elicits both private signals and beliefs. The mechanism extends the previous versions of the Bayesian Truth Serum (the original BTS, the RBTS, and the multi-valued BTS), by allowing small populations and non-binary private signals, while not requiring additional assumptions on the belief updating process. For priors that are sufficiently smooth, such as Gaussians, the mechanism allows signals to be continuous.
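Mechanisms in the Bayesian Truth Serum family pay for belief reports with strictly proper scoring rules. The following minimal check, purely illustrative and not the mechanism from the paper, shows that the quadratic score is maximized in expectation by reporting one's true belief over a discrete outcome.

```python
# Quadratic scoring rule: S(p, x) = 2 * p[x] - sum_y p[y]**2.
# A strictly proper rule: truthful reporting maximizes expected score.

def quadratic_score(p, x):
    return 2 * p[x] - sum(q * q for q in p)

def expected_score(report, truth):
    """Expected score of `report` when outcomes are drawn from `truth`."""
    return sum(truth[x] * quadratic_score(report, x) for x in range(len(truth)))

truth = [0.2, 0.5, 0.3]                      # the agent's actual belief
candidates = [[0.2, 0.5, 0.3], [0.1, 0.8, 0.1], [1/3, 1/3, 1/3]]
for r in candidates:
    print(r, round(expected_score(r, truth), 4))
# The truthful report (0.38) beats both distorted reports.
```

For continuous signals, the same rule applies with densities in place of probability vectors, which is where the smoothness assumption on priors becomes relevant.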
A Robust Bayesian Truth Serum for Non-Binary Signals
Radanovic, Goran (Ecole Polytechnique Fédérale de Lausanne (EPFL)) | Faltings, Boi (Ecole Polytechnique Fédérale de Lausanne (EPFL))
Several mechanisms have been proposed for incentivizing truthful reports of private signals owned by rational agents, among them the peer prediction method and the Bayesian truth serum. The robust Bayesian truth serum (RBTS) for small populations and binary signals is particularly interesting since it does not require the common prior to be known to the mechanism. We further analyze the setting in which the common prior is not known to the mechanism and give several results regarding the restrictions that need to be placed in order to obtain an incentive-compatible mechanism. Moreover, we construct a Bayes-Nash incentive-compatible scheme, called multi-valued RBTS, that generalizes RBTS to operate on both small populations and non-binary signals.
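As context, here is a hedged sketch of the binary-signal RBTS payment as I recall Witkowski and Parkes' construction: each agent reports a signal and a prediction, and is scored against a peer's prediction and a reference agent's signal using the binary quadratic scoring rule. Details should be checked against the original paper before relying on them.

```python
# Hedged sketch of the RBTS payment for binary signals; the shadowing
# step and scoring structure follow my recollection of the original
# construction, not this paper's multi-valued generalization.

def quad(y, omega):
    """Binary quadratic scoring rule for reported probability y of omega = 1."""
    p = y if omega == 1 else 1 - y
    return 2 * p - (y * y + (1 - y) * (1 - y))

def rbts_payment(x_i, y_i, y_j, x_k):
    """Agent i's payment, given peer j's prediction and reference k's signal."""
    delta = min(y_j, 1 - y_j)                 # shadowing step size
    y_shadow = y_j + delta if x_i == 1 else y_j - delta
    info_score = quad(y_shadow, x_k)          # rewards the signal report
    prediction_score = quad(y_i, x_k)         # rewards the belief report
    return info_score + prediction_score

# Example: agent saw signal 1 and predicts 70% of peers saw 1; the peer
# predicted 60%, and the reference agent reported signal 1.
print(rbts_payment(x_i=1, y_i=0.7, y_j=0.6, x_k=1))
```

The "robust" part is visible in the payment's inputs: only reports appear, with no reference to a common prior known to the mechanism.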