Collaborating Authors

 Constantin, Florin


Learning Valuation Functions

arXiv.org Machine Learning

In this paper we study the approximate learnability of valuations commonly used throughout economics and game theory for the quantitative encoding of agent preferences. We provide upper and lower bounds regarding the learnability of important subclasses of valuation functions that exhibit no complementarities. Our main results concern their approximate learnability in the distributional learning (PAC-style) setting. We provide nearly tight lower and upper bounds of $\tilde{\Theta}(n^{1/2})$ on the approximation factor for learning XOS and subadditive valuations, both widely studied superclasses of submodular valuations. Interestingly, we show that the $\tilde{\Omega}(n^{1/2})$ lower bound can be circumvented for XOS functions of polynomial complexity; we provide an algorithm for learning the class of XOS valuations with a representation of polynomial size achieving an $O(n^{\epsilon})$ approximation factor in time $O(n^{1/\epsilon})$ for any $\epsilon > 0$. This highlights the importance of considering the complexity of the target function for polynomial-time learning. We also provide new learning results for interesting subclasses of submodular functions. Our upper bounds for distributional learning leverage novel structural results for all these valuation classes. We show that many of these results provide new learnability results in the Goemans et al. model (SODA 2009) of approximate learning everywhere via value queries. We also introduce a new model that is more realistic in economic settings, in which the learner can set prices and observe purchase decisions at these prices rather than observing the valuation function directly. In this model, most of our upper bounds continue to hold despite the fact that the learner receives less information (both for learning in the distributional setting and with value queries), while our lower bounds naturally extend.
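
The XOS (fractionally subadditive) class at the center of these bounds can be made concrete with a small sketch. The following Python fragment is illustrative only and not taken from the paper: it represents an XOS valuation as a maximum over additive clauses with nonnegative weights and brute-force checks subadditivity on a toy instance; the clause weights and the four-item ground set are hypothetical.

from itertools import combinations

def xos_value(bundle, clauses):
    """Value of a bundle under an XOS valuation given as a list of weight dicts."""
    return max(sum(w.get(i, 0.0) for i in bundle) for w in clauses)

def is_subadditive(n_items, valuation):
    """Brute-force check that v(S | T) <= v(S) + v(T) for all nonempty S, T (tiny n only)."""
    subsets = [frozenset(c) for r in range(1, n_items + 1)
               for c in combinations(range(n_items), r)]
    return all(valuation(s | t) <= valuation(s) + valuation(t) + 1e-9
               for s in subsets for t in subsets)

if __name__ == "__main__":
    # Three hypothetical additive clauses over four items (made-up weights).
    clauses = [{0: 3.0, 1: 1.0}, {1: 2.0, 2: 2.0}, {0: 1.0, 2: 1.0, 3: 4.0}]
    v = lambda bundle: xos_value(bundle, clauses)
    print(v({0, 1}))             # 4.0 -- the first clause is the maximizer
    print(is_subadditive(4, v))  # True: every XOS valuation is subadditive

In a representation of this kind, the number of additive clauses is the natural measure of size, which is the sense in which the abstract's polynomial-complexity assumption should be read.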


On Expressing Value Externalities in Position Auctions

AAAI Conferences

We introduce a bidding language for expressing negative value externalities in position auctions for online advertising. The unit-bidder constraints (UBC) language allows a bidder to condition a bid on its allocated slot and on the slots allocated to other bidders. We introduce a natural extension of the Generalized Second Price (GSP) auction, the expressive GSP (eGSP) auction, that induces truthful revelation of constraints for a rich subclass of unit-bidder types, namely downward-monotonic UBC. We establish the existence of envy-free Nash equilibrium in eGSP under a further restriction to a subclass of exclusion constraints, for which the standard GSP has no pure strategy Nash equilibrium. The equilibrium results are obtained by reduction to equilibrium analysis for reserve price GSP (Even-Dar et al. 2008). In considering the winner determination problem, which is NP-hard, we bound the approximation ratio for social welfare in eGSP and provide parameterized complexity results.
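
For context on the mechanism that eGSP extends, the following Python sketch implements the standard GSP allocation and next-price payment rule with a reserve price (in the spirit of the reserve-price GSP referenced above). It deliberately ignores the paper's unit-bidder constraints, and the bids, slot count, and reserve value are hypothetical.

def gsp(bids, num_slots, reserve=0.0):
    """Standard GSP: rank bidders by bid; the winner of slot k pays the (k+1)-th bid,
    floored at the reserve. bids maps bidder -> bid; returns (bidder, slot, payment) triples."""
    ranked = sorted((kv for kv in bids.items() if kv[1] >= reserve),
                    key=lambda kv: kv[1], reverse=True)
    outcome = []
    for slot, (bidder, _) in enumerate(ranked[:num_slots]):
        next_bid = ranked[slot + 1][1] if slot + 1 < len(ranked) else reserve
        outcome.append((bidder, slot, max(next_bid, reserve)))
    return outcome

if __name__ == "__main__":
    print(gsp({"a": 5.0, "b": 3.0, "c": 1.0}, num_slots=2, reserve=2.0))
    # [('a', 0, 3.0), ('b', 1, 2.0)] -- 'c' is priced out by the reserve

The eGSP auction of the paper augments this baseline by letting each bid carry unit-bidder constraints, and it is the resulting constrained winner determination problem that is NP-hard.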


Dynamic Incentive Mechanisms

AI Magazine

Much of AI is concerned with the design of intelligent agents. A complementary challenge is to understand how to design “rules of encounter” by which to promote simple, robust and beneficial interactions between multiple intelligent agents. This is a natural development, as AI is increasingly used for automated decision making in real-world settings. As we extend the ideas of mechanism design from economic theory, the mechanisms (or rules) become algorithmic and many new challenges surface. Starting with a short background on mechanism design theory, the aim of this paper is to provide a nontechnical exposition of recent results on dynamic incentive mechanisms, which provide rules for the coordination of agents in sequential decision problems. The framework of dynamic mechanism design embraces coordinated decision-making both in the context of uncertainty about the world external to an agent and also in regard to the dynamics of agent preferences. In addition to tracing some recent developments, we point to ongoing research challenges.
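
As a minimal, hedged illustration of the incentive ideas surveyed here (not an example from the article), the Python sketch below shows the static sealed-bid second-price auction, the textbook mechanism in which truthful bidding is a dominant strategy; all values and bids are made up.

def second_price(bids):
    """Sealed-bid second-price auction: highest bidder wins and pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    payment = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, payment

def utility(bidder, true_value, bids):
    """Quasi-linear utility: value minus payment if the bidder wins, zero otherwise."""
    winner, payment = second_price(bids)
    return true_value - payment if winner == bidder else 0.0

if __name__ == "__main__":
    others = {"b": 6.0, "c": 4.0}
    true_value = 8.0
    # Bidding truthfully is never worse than shading the bid (here it is strictly better).
    print(utility("a", true_value, {**others, "a": true_value}))  # 2.0
    print(utility("a", true_value, {**others, "a": 5.0}))         # 0.0 (loses the item)

The dynamic mechanisms discussed in the article face the harder task of sustaining such incentives when decisions are sequential and agent preferences change over time.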