
Collaborating Authors

 Lubin, Benjamin


Computing Bayes-Nash Equilibria in Combinatorial Auctions with Verification

Journal of Artificial Intelligence Research

We present a new algorithm for computing pure-strategy ε-Bayes-Nash equilibria (ε-BNEs) in combinatorial auctions with continuous value and action spaces. An essential innovation of our algorithm is the separation of the search phase (for finding the ε-BNE) from the verification phase (for computing the ε). Using this approach, we obtain an algorithm that is both very fast and provides theoretical guarantees on the ε it finds. Our main technical contribution is a verification method that allows us to upper-bound the ε across the whole continuous value space without making assumptions about the mechanism. Using our algorithm, we can now compute ε-BNEs in multi-minded domains that are significantly more complex than what was previously solvable. We release our code under an open-source license to enable researchers to perform algorithmic analyses of auctions, to enable bidders to analyze different strategies, and to facilitate many other applications.
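The sketch below is only an illustration of the search/verification split described above, not the released algorithm: a coarse iterated-best-response search followed by a sampling-based check of ε. The oracle expected_utility(i, value, bid, strategies), the single-dimensional [0, 1] value and bid spaces, and the grid sizes are assumptions made for this example, and the verification step here yields an empirical estimate rather than the paper's provable upper bound.

import numpy as np

def search_phase(expected_utility, n_bidders, grid=11, iters=50):
    # Iterated best response on a coarse value grid; strategies[i, k] is
    # bidder i's bid at the k-th grid value.
    values = np.linspace(0.0, 1.0, grid)
    strategies = np.tile(values, (n_bidders, 1))          # start from truthful bidding
    for _ in range(iters):
        for i in range(n_bidders):
            for k, v in enumerate(values):
                candidates = np.linspace(0.0, v, grid)    # never bid above value
                utils = [expected_utility(i, v, b, strategies) for b in candidates]
                strategies[i, k] = candidates[int(np.argmax(utils))]
    return values, strategies

def verification_phase(expected_utility, values, strategies, fine_grid=101):
    # Empirical epsilon: largest utility gain any bidder obtains by deviating,
    # checked on a finer value/bid grid than the one used in the search phase.
    eps = 0.0
    for i in range(strategies.shape[0]):
        for v in np.linspace(0.0, 1.0, fine_grid):
            played = np.interp(v, values, strategies[i])  # piecewise-linear strategy
            u_played = expected_utility(i, v, played, strategies)
            u_best = max(expected_utility(i, v, b, strategies)
                         for b in np.linspace(0.0, v, fine_grid))
            eps = max(eps, u_best - u_played)
    return eps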


Probably Approximately Efficient Combinatorial Auctions via Machine Learning

AAAI Conferences

A well-known problem in combinatorial auctions (CAs) is that the value space grows exponentially in the number of goods, which often puts a large burden on the bidders and on the auctioneer. In this paper, we introduce a new design paradigm for CAs based on machine learning (ML). Bidders report their values (bids) to a proxy agent by answering a small number of value queries. The proxy agent then uses an ML algorithm to generalize from those bids to the whole value space, and the efficient allocation is computed based on the generalized valuations. We introduce the concept of "probably approximate efficiency (PAE)" to measure the efficiency of the new ML-based auctions, and we formally show how the generalizability of an ML algorithm relates to the efficiency loss incurred by the corresponding ML-based auction. To instantiate our paradigm, we use support vector regression (SVR) as our ML algorithm, which enables us to keep the winner determination problem of the CA tractable. Different parameters of the SVR algorithm allow us to trade off the expressiveness, economic efficiency, and computational efficiency of the CA. Finally, we demonstrate experimentally that, even with a small number of bids, our ML-based auctions are highly efficient with high probability.
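As a toy instantiation of this pipeline, the sketch below elicits a handful of bundle values per bidder, generalizes them with support vector regression (scikit-learn's SVR), and computes an allocation on the learned valuations. The oracle true_value(i, bundle) and the tiny instance sizes are assumptions; in particular, the brute-force winner determination here stands in for the structured, tractable formulation the paper obtains from the SVR machinery.

import itertools
import numpy as np
from sklearn.svm import SVR

def run_ml_auction(true_value, n_bidders=3, n_goods=4, n_queries=8, seed=0):
    rng = np.random.default_rng(seed)
    learned = []
    for i in range(n_bidders):
        # Value queries on randomly drawn bundles (0/1 indicator vectors).
        bundles = rng.integers(0, 2, size=(n_queries, n_goods))
        answers = np.array([true_value(i, b) for b in bundles])
        model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(bundles, answers)
        learned.append(model)

    # Winner determination on the generalized valuations (brute force for the toy case):
    # each good is assigned to one of the bidders or left unallocated (index n_bidders).
    best_alloc, best_welfare = None, -np.inf
    for assignment in itertools.product(range(n_bidders + 1), repeat=n_goods):
        welfare = 0.0
        for i in range(n_bidders):
            bundle = np.array([1 if a == i else 0 for a in assignment]).reshape(1, -1)
            welfare += learned[i].predict(bundle)[0]
        if welfare > best_welfare:
            best_alloc, best_welfare = assignment, welfare
    return best_alloc, best_welfare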


Modeling Multi-Attribute Demand for Sustainable Cloud Computing with Copulae

AAAI Conferences

As cloud computing gains in popularity, understanding the patterns and structure of its loads is increasingly important in order to drive effective resource allocation, scheduling, and pricing decisions. These efficiency increases are in turn associated with a reduction in the data center's environmental footprint. Existing models have treated only a single resource type, such as CPU or memory, at a time. We offer a machine learning approach to capture the joint distribution: we model the relationship among multiple resources by carefully fitting both the marginal distribution of each resource type and the non-linear structure of their correlation via a copula distribution. We investigate several choices for both models by studying a public data set of Google data-center usage. We show the Burr XII distribution to be a particularly effective choice for modeling the marginals and the Frank copula to be the best choice for stitching these together into a joint distribution. Our approach offers a significant fidelity improvement over single-resource models and generalizes directly to higher dimensions. In use, this improvement will translate directly into reductions in energy consumption.
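The following sketch illustrates the marginal-plus-copula construction using SciPy only: maximum-likelihood Burr XII fits for the marginals, a probability integral transform to the unit square, and a maximum-likelihood fit of the Frank copula parameter. The arrays cpu and mem (normalized resource demands, e.g. from the Google cluster trace) and the parameter bounds are assumptions for the example; this is not the paper's code.

import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def frank_neg_loglik(theta, u, v):
    # Negative log-likelihood of the bivariate Frank copula density.
    if abs(theta) < 1e-9:
        return 0.0                                        # theta -> 0: independence copula
    a = 1.0 - np.exp(-theta)
    num = theta * a * np.exp(-theta * (u + v))
    den = (a - (1.0 - np.exp(-theta * u)) * (1.0 - np.exp(-theta * v))) ** 2
    return -np.sum(np.log(num / den))

def fit_joint(cpu, mem):
    # 1) Marginals: maximum-likelihood Burr XII fits (location fixed at zero).
    cpu_params = stats.burr12.fit(cpu, floc=0)
    mem_params = stats.burr12.fit(mem, floc=0)
    # 2) Probability integral transform through the fitted marginal CDFs.
    u = np.clip(stats.burr12.cdf(cpu, *cpu_params), 1e-6, 1.0 - 1e-6)
    v = np.clip(stats.burr12.cdf(mem, *mem_params), 1e-6, 1.0 - 1e-6)
    # 3) Frank copula parameter by maximum likelihood on the transformed data.
    res = minimize_scalar(frank_neg_loglik, bounds=(-30.0, 30.0),
                          args=(u, v), method="bounded")
    return cpu_params, mem_params, res.x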


A Faster Core Constraint Generation Algorithm for Combinatorial Auctions

AAAI Conferences

Computing prices in core-selecting combinatorial auctions is a computationally hard problem. Auctions with many bids can only be solved using a recently proposed core constraint generation (CCG) algorithm, which may still take days on hard instances. In this paper, we present a new algorithm that significantly outperforms the current state of the art. Towards this end, we first provide an alternative definition of the set of core constraints, where each constraint is weakly stronger, and prove that together these constraints define the same polytope as the previous definition. Using these new theoretical insights, we develop two new algorithmic techniques that generate additional constraints in each iteration of the CCG algorithm by (1) exploiting separability in allocative conflicts between participants in the auction, and (2) leveraging non-optimal solutions. We show experimentally that our new algorithm leads to significant speed-ups on a variety of large combinatorial auction problems. Our work provides new insights into the structure of core constraints and advances the state of the art in fast algorithms for computing core prices in large combinatorial auctions.
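For context, the sketch below shows a baseline core constraint generation loop of the kind the paper accelerates; the paper's additional per-iteration constraints are not reproduced. The separation_oracle routine, which returns the most violated core constraint as (coalition_mask, rhs), meaning that the payments of winners outside the coalition must sum to at least rhs, or None once the payments are in the core, is an assumed interface rather than part of any released code.

import numpy as np
from scipy.optimize import linprog

def ccg_prices(winning_bids, vcg_payments, separation_oracle, max_iters=1000):
    n = len(winning_bids)
    A_ub, b_ub = [], []                        # constraints stored as -a.p <= -rhs
    p = np.asarray(vcg_payments, dtype=float)  # start from the VCG point
    for _ in range(max_iters):
        violated = separation_oracle(p)
        if violated is None:
            return p                           # no blocking coalition remains
        coalition_mask, rhs = violated
        a = 1.0 - np.asarray(coalition_mask, dtype=float)  # winners outside the coalition
        A_ub.append(-a)
        b_ub.append(-rhs)
        # Master LP: minimum-revenue point subject to all generated core constraints,
        # with payments bounded between VCG prices and the winning bids.
        res = linprog(c=np.ones(n), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=list(zip(vcg_payments, winning_bids)), method="highs")
        p = res.x
    return p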


Payment Rules through Discriminant-Based Classifiers

arXiv.org Artificial Intelligence

In mechanism design it is typical to impose incentive compatibility and then derive an optimal mechanism subject to this constraint. By replacing the incentive compatibility requirement with the goal of minimizing expected ex post regret, we are able to adapt statistical machine learning techniques to the design of payment rules. This computational approach to mechanism design is applicable to domains with multi-dimensional types and situations where computational efficiency is a concern. Specifically, given an outcome rule and access to a type distribution, we train a support vector machine with a special discriminant function structure such that it implicitly establishes a payment rule with desirable incentive properties. We discuss applications to a multi-minded combinatorial auction with a greedy winner-determination algorithm and to an assignment problem with an egalitarian outcome rule. Experimental results demonstrate both that the construction produces payment rules with low ex post regret and that penalizing classification errors is effective in preventing failures of ex post individual rationality.
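The sketch below illustrates the objective named in the abstract, expected ex post regret, via Monte-Carlo estimation for a given outcome rule and payment rule; it is not the paper's discriminant-based construction. The interfaces outcome_rule(bids), payment_rule(bids) (one payment per agent), value(v_i, outcome, i), and the two samplers are assumptions made for this example.

import numpy as np

def expected_ex_post_regret(outcome_rule, payment_rule, value,
                            sample_types, sample_misreports,
                            n_profiles=1000, n_misreports=50, seed=0):
    # Estimate the expected ex post regret: for each sampled type profile and
    # each agent, compare truthful utility against the best sampled misreport,
    # holding the other agents' reports fixed.
    rng = np.random.default_rng(seed)
    regrets = []
    for _ in range(n_profiles):
        types = sample_types(rng)                        # one true type per agent
        for i, v_i in enumerate(types):
            bids = list(types)
            truthful_outcome = outcome_rule(bids)
            u_truthful = value(v_i, truthful_outcome, i) - payment_rule(bids)[i]
            best_deviation = u_truthful
            for _ in range(n_misreports):
                bids[i] = sample_misreports(rng, v_i)    # candidate misreport for agent i
                o = outcome_rule(bids)
                u_dev = value(v_i, o, i) - payment_rule(bids)[i]
                best_deviation = max(best_deviation, u_dev)
            regrets.append(best_deviation - u_truthful)
    return float(np.mean(regrets))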