
Collaborating Authors

Chen, Yiling


The Fisher Market Game: Equilibrium and Welfare

AAAI Conferences

The Fisher market model is one of the most fundamental resource allocation models in economics. In a Fisher market, the prices and allocations of goods are determined according to the preferences and budgets of buyers so as to clear the market. In a Fisher market game, however, buyers are strategic and report their preferences over goods; the market-clearing prices and allocations are then determined from their reported preferences rather than their real preferences. We show that the Fisher market game always has a pure Nash equilibrium for buyers with linear, Leontief, and Cobb-Douglas utility functions, three representative classes of utility functions in the important Constant Elasticity of Substitution (CES) family. Furthermore, to quantify the social efficiency of these equilibria, we prove Price of Anarchy bounds for the game when the utility functions of buyers fall into each of these three classes.
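Of the three CES classes mentioned, Cobb-Douglas utilities admit a closed-form market-clearing solution, which makes the model concrete: each buyer spends a fixed fraction of its budget on each good, so the equilibrium price of a good is simply the total spending directed at it. A minimal sketch (the function name and unit-supply assumption are illustrative, not from the paper):

```python
import numpy as np

def cobb_douglas_equilibrium(budgets, prefs):
    """Market-clearing prices and allocations for Cobb-Douglas buyers.

    budgets: (m,) array of buyer budgets B_i
    prefs:   (m, n) array of exponents a_ij, each row normalized to sum to 1
    Each good is assumed to have unit supply.
    """
    prefs = prefs / prefs.sum(axis=1, keepdims=True)  # normalize rows
    # A Cobb-Douglas buyer spends fraction a_ij of budget B_i on good j,
    # so the market-clearing price is the total spending on that good.
    prices = budgets @ prefs                        # p_j = sum_i B_i a_ij
    alloc = (budgets[:, None] * prefs) / prices     # x_ij = B_i a_ij / p_j
    return prices, alloc

budgets = np.array([100.0, 50.0])
prefs = np.array([[0.5, 0.5], [0.2, 0.8]])
p, x = cobb_douglas_equilibrium(budgets, prefs)
# each column of x sums to 1, so every unit-supply good exactly clears
```

Because each buyer's spending shares are fixed regardless of prices, reported preferences translate directly into prices here, which is what makes the strategic (game) version of the model interesting.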



Reports of the AAAI 2012 Conference Workshops

AI Magazine

The AAAI-12 Workshop program was held Sunday and Monday, July 22–23, 2012 at the Sheraton Centre Toronto Hotel in Toronto, Ontario, Canada. The AAAI-12 workshop program included 9 workshops covering a wide range of topics in artificial intelligence. The titles of the workshops were Activity Context Representation: Techniques and Languages, AI for Data Center Management and Cloud Computing, Cognitive Robotics, Grounding Language for Physical Systems, Human Computation, Intelligent Techniques for Web Personalization and Recommendation, Multiagent Pathfinding, Neural-Symbolic Learning and Reasoning, Problem Solving Using Classical Planners, Semantic Cities. This article presents short summaries of those events.


Adaptive Polling for Information Aggregation

AAAI Conferences

The flourishing of online labor markets such as Amazon Mechanical Turk (MTurk) makes it easy to recruit many workers for solving small tasks. We study whether information elicitation and aggregation over a combinatorial space can be achieved by integrating small pieces of potentially imprecise information, gathered from a large number of workers through simple, one-shot interactions in an online labor market. We consider the setting of predicting the ranking of n competing candidates, each having a hidden underlying strength parameter. At each step, our method estimates the strength parameters from the collected pairwise comparison data and adaptively chooses another pairwise comparison question for the next recruited worker. Through an MTurk experiment, we show that the adaptive method effectively elicits and aggregates information, outperforming a naive method using a random pairwise comparison question at each step.
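A standard way to model hidden strengths from pairwise comparisons is the Bradley-Terry model, and a natural adaptive rule is to ask about the pair whose outcome is currently most uncertain. The sketch below illustrates that general scheme; the paper's actual estimator and question-selection rule may differ, and all function names are illustrative:

```python
import math

def fit_bradley_terry(n, comparisons, steps=200, lr=0.1):
    """MLE of Bradley-Terry strengths s by gradient ascent.

    comparisons: list of (winner, loser) index pairs.
    P(i beats j) is modeled as sigmoid(s_i - s_j).
    """
    s = [0.0] * n
    for _ in range(steps):
        grad = [0.0] * n
        for w, l in comparisons:
            p = 1.0 / (1.0 + math.exp(s[l] - s[w]))  # P(w beats l)
            grad[w] += 1.0 - p
            grad[l] -= 1.0 - p
        s = [si + lr * gi for si, gi in zip(s, grad)]
        mean = sum(s) / n
        s = [si - mean for si in s]  # pin the scale: strengths sum to 0
    return s

def next_question(s):
    """Adaptive step: ask about the pair with the smallest estimated
    strength gap, i.e. the comparison whose outcome is most uncertain."""
    n = len(s)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return min(pairs, key=lambda ij: abs(s[ij[0]] - s[ij[1]]))
```

After each worker's answer is appended to `comparisons`, re-fitting and calling `next_question` yields the next one-shot task to post.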


TurkServer: Enabling Synchronous and Longitudinal Online Experiments

AAAI Conferences

With the proliferation of online labor markets and other social computing platforms, online experiments have become a low-cost and scalable way to empirically test hypotheses and mechanisms in both human computation and social science. Yet, despite the potential in designing more powerful and expressive online experiments using multiple subjects, researchers still face many technical and logistical difficulties. We see synchronous and longitudinal experiments involving real-time interaction between participants as a dual-use paradigm for both human computation and social science, and present TurkServer, a platform that facilitates these types of experiments on Amazon Mechanical Turk. Our work has the potential to make more fruitful online experiments accessible to researchers in many different fields.


Market Manipulation with Outside Incentives

AAAI Conferences

Much evidence has shown that prediction markets, when used in isolation, can effectively aggregate dispersed information about uncertain future events and produce remarkably accurate forecasts. However, if the market prediction will be used for decision making, a strategic participant with a vested interest in the decision outcome may want to manipulate the market prediction in order to influence the resulting decision. The presence of such incentives outside of the market would seem to damage information aggregation because of the potential distrust among market participants. While this is true under some conditions, we find that, if the existence of such incentives is certain and common knowledge, then in many cases there exists a separating equilibrium for the market where information is fully aggregated. This equilibrium also maximizes social welfare for convex outside payoff functions. At this equilibrium, the participant with outside incentives makes a costly move to gain the trust of other participants. When the existence of outside incentives is uncertain, however, trust cannot be established between players if the outside incentive is sufficiently large, and the separating equilibrium no longer exists.


Designing Markets for Prediction

AI Magazine

In this article, we survey a number of mechanisms created to elicit predictions, many newly proposed within the last decade. We focus on the engineering questions: How do they work and why? What factors and goals are most important in their design? The primary goal of a prediction mechanism is to obtain and aggregate dispersed information, which often exists in tacit forms as beliefs, opinions, or judgements of agents. Coalescing information is a necessary first step for decision making in almost all domains. For example, consider seasonal influenza, a significant cause of illness and death around the world. Although it recurs every year, the geographic location, timing, magnitude, and duration of outbreaks vary widely. Many people possess relevant pieces of the full information puzzle, including doctors who meet patients, clinical microbiologists who perform respiratory culture tests, pharmacists who fill prescriptions, people who have the flu, and people who know people who have the flu.
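The basic building block behind many of the surveyed mechanisms is a strictly proper scoring rule: a payment scheme under which an agent maximizes its expected reward by reporting its true belief. A minimal example with the logarithmic scoring rule (variable names are illustrative):

```python
import math

def log_score(report, outcome):
    """Logarithmic scoring rule: reward the log of the probability the
    agent assigned to the realized outcome."""
    return math.log(report[outcome])

def expected_score(belief, report):
    """Expected score of a report under the agent's true belief."""
    return sum(belief[o] * log_score(report, o) for o in range(len(belief)))

belief = [0.7, 0.3]
# strict properness: the truthful report beats any misreport in expectation
truthful = expected_score(belief, belief)
```

Market scoring rules, discussed later in the article, chain such payments across a sequence of traders so that each trader is effectively paid for improving the market's current probability estimate.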


Truth, Justice, and Cake Cutting

AAAI Conferences

Cake cutting is a common metaphor for the division of a heterogeneous divisible good. There are numerous papers that study the problem of fairly dividing a cake; a small number of them also take into account self-interested agents and consequent strategic issues, but these papers focus on fairness and consider a strikingly weak notion of truthfulness. In this paper we investigate the problem of cutting a cake in a way that is truthful and fair, where, for the first time, the notion of truthfulness used is the standard dominant-strategy one that is ubiquitous in social choice and computer science. We design both deterministic and randomized cake cutting algorithms that are truthful and fair under different assumptions with respect to the valuation functions of the agents.
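For intuition about the kind of procedure at stake, the classic two-agent cut-and-choose protocol guarantees each agent a piece it values at least half as much as the whole cake, but it is not truthful in the dominant-strategy sense the paper studies (the cutter can gain by cutting strategically). A sketch over the unit interval, with valuations given as density functions (all names are illustrative):

```python
def value(density, a, b, steps=1000):
    """Riemann-sum value an agent assigns to the interval [a, b]."""
    h = (b - a) / steps
    return sum(density(a + (k + 0.5) * h) for k in range(steps)) * h

def cut_and_choose(d1, d2):
    """Two-agent cut-and-choose on the cake [0, 1]: agent 1 cuts into
    two pieces it values equally (found by bisection), agent 2 takes
    whichever piece it prefers."""
    lo, hi = 0.0, 1.0
    half = value(d1, 0.0, 1.0) / 2.0
    for _ in range(50):  # bisect for agent 1's halving point
        mid = (lo + hi) / 2.0
        if value(d1, 0.0, mid) < half:
            lo = mid
        else:
            hi = mid
    cut = (lo + hi) / 2.0
    left, right = (0.0, cut), (cut, 1.0)
    if value(d2, *left) >= value(d2, *right):
        return {"agent2": left, "agent1": right}
    return {"agent1": left, "agent2": right}

# agent 1 values the cake uniformly; agent 2 prefers the right end
alloc = cut_and_choose(lambda x: 1.0, lambda x: 2.0 * x)
```

The protocol is proportional (each agent's piece is worth at least half to it under truthful play), which is exactly the fairness notion the truthful mechanisms in the paper must additionally enforce against strategic reports.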


A New Understanding of Prediction Markets Via No-Regret Learning

arXiv.org Artificial Intelligence

We explore the striking mathematical connections that exist between market scoring rules, cost function based prediction markets, and no-regret learning. We show that any cost function based prediction market can be interpreted as an algorithm for the commonly studied problem of learning from expert advice by equating trades made in the market with losses observed by the learning algorithm. If the loss of the market organizer is bounded, this bound can be used to derive an O(sqrt(T)) regret bound for the corresponding learning algorithm. We then show that the class of markets with convex cost functions exactly corresponds to the class of Follow the Regularized Leader learning algorithms, with the choice of a cost function in the market corresponding to the choice of a regularizer in the learning problem. Finally, we show an equivalence between market scoring rules and prediction markets with convex cost functions. This implies that market scoring rules can also be interpreted naturally as Follow the Regularized Leader algorithms; the equivalence may be of independent interest. These connections provide new insight into how commonly studied markets, such as the Logarithmic Market Scoring Rule, aggregate opinions into accurate estimates of the likelihood of future events.
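The Logarithmic Market Scoring Rule (LMSR) makes the correspondence concrete: its cost function is a scaled log-sum-exp, so its instantaneous prices are a softmax of the outstanding share vector, the same weights the Hedge algorithm (Follow the Regularized Leader with an entropic regularizer) places on experts. A minimal sketch (function names are illustrative):

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    m = max(q)  # subtract the max inside exp for numerical stability
    return b * (m / b + math.log(sum(math.exp((qi - m) / b) for qi in q)))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices are the gradient of C: a softmax of q / b.
    These coincide with the expert weights of Hedge / entropic FTRL."""
    m = max(q)
    e = [math.exp((qi - m) / b) for qi in q]
    z = sum(e)
    return [ei / z for ei in e]

def trade_cost(q, delta, b=100.0):
    """A trader buying share bundle delta pays C(q + delta) - C(q)."""
    q_new = [qi + di for qi, di in zip(q, delta)]
    return lmsr_cost(q_new, b) - lmsr_cost(q, b), q_new

q = [0.0, 0.0]
paid, q = trade_cost(q, [20.0, 0.0])  # buy 20 shares of outcome 0
```

The liquidity parameter b also caps the organizer's worst-case loss at b * log(n) for n outcomes, which is the bounded-loss condition the paper converts into an O(sqrt(T)) regret bound.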