The problem of market clearing is to set a price for an item such that quantity demanded equals quantity supplied. In this work, we cast the problem of predicting clearing prices into a learning framework and use the resulting models to perform revenue optimization in auctions and markets with contextual information. The economic intuition behind market clearing gives us theoretically grounded, fine-grained control over the aggressiveness of the resulting pricing policy. To evaluate our approach, we fit a model of clearing prices over a massive dataset of bids in display ad auctions from a major ad exchange. The learned prices outperform other modeling techniques in the literature in terms of revenue and efficiency trade-offs. Because of the convex nature of the clearing loss function, the convergence rate of our method is as fast as linear regression.
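As a toy illustration of the clearing condition described above, a price equating quantity demanded and quantity supplied can be found by bisection on the demand/supply gap. This is a minimal sketch with invented sample bids and asks, not the paper's learned model:

```python
def clearing_price(bids, asks, lo=0.0, hi=100.0, tol=1e-6):
    """Find a price at which quantity demanded equals quantity supplied,
    by bisection: demand is non-increasing and supply non-decreasing in price."""
    demand = lambda p: sum(1 for b in bids if b >= p)  # buyers willing to pay >= p
    supply = lambda p: sum(1 for a in asks if a <= p)  # sellers willing to accept <= p
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if demand(mid) > supply(mid):
            lo = mid  # excess demand: raise the price
        else:
            hi = mid  # excess supply (or balance): lower the price
    return (lo + hi) / 2

bids = [10, 8, 6, 4, 2]
asks = [1, 3, 5, 7, 9]
p = clearing_price(bids, asks)  # any price in [5, 6] clears 3 units here
```

The paper's contribution is to predict such prices from contextual features via a convex surrogate loss rather than compute them from observed bids; the bisection above only illustrates the underlying equilibrium condition.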
Selling personal information is very different from selling physical goods, and raises novel challenges. On the sell-side of the market, individuals own their personal data and experience costs based on the usage of that data insofar as the usage leads to future quantifiable harm. On the buy-side of the market, buyers are interested in "statistical information" about the dataset, that is, aggregate information, rather than information derived from a single individual. Differential privacy [1] provides a means to quantify the harm that can come to individual data owners as the result of the use of their data. This ability to quantify harm allows data owners to be compensated for the risk they incur. Past work studying markets for private data focused on the simple case in which the buyer is interested only in the answer to a single linear function of the data [2, 3, 4, 6], which makes the buy-side of the market particularly simple.
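The mechanics behind this idea can be sketched with the standard Laplace mechanism: a counting query is answered under epsilon-differential privacy, and owners are paid in proportion to the privacy loss epsilon they incur. The function names and the linear price-per-epsilon rule below are illustrative assumptions, not the cited papers' mechanisms:

```python
import math
import random

def noisy_count(data, predicate, epsilon):
    """Answer a counting query under epsilon-differential privacy via the
    Laplace mechanism. A count has sensitivity 1, so the noise scale is 1/epsilon."""
    true_answer = sum(1 for x in data if predicate(x))
    u = random.random() - 0.5                      # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    noise = -(1.0 / epsilon) * sign * math.log(1 - 2 * abs(u))  # Laplace(0, 1/epsilon)
    return true_answer + noise

def total_compensation(n_owners, epsilon, price_per_epsilon):
    """Pay each data owner in proportion to the privacy loss epsilon they incur
    (an assumed linear compensation rule for illustration)."""
    return n_owners * epsilon * price_per_epsilon
```

Smaller epsilon means less harm to each owner and therefore cheaper queries, but noisier answers for the buyer; that trade-off is what a market for private data must price.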
Data analytics using machine learning (ML) has become ubiquitous in science, business intelligence, journalism and many other domains. While a lot of work focuses on reducing the training cost, inference runtime and storage cost of ML models, little work studies how to reduce the cost of data acquisition, a gap that can cost sellers revenue and buyers affordability and efficiency. In this paper, we propose a model-based pricing (MBP) framework, which instead of pricing the data, directly prices ML model instances. We first formally describe the desired properties of the MBP framework, with a focus on avoiding arbitrage. Next, we show a concrete realization of the MBP framework via a noise injection approach, which provably satisfies the desired formal properties. Based on the proposed framework, we then provide algorithmic solutions on how the seller can assign prices to models under different market scenarios (such as to maximize revenue). Finally, we conduct extensive experiments, which validate that the MBP framework can provide high revenue to the seller, high affordability to the buyer, and also operate at low runtime cost.
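One natural arbitrage condition in noise-injection pricing is that the price of a model instance be non-increasing in the injected noise variance: a noisier, cheaper instance should never be a back door to a more accurate, pricier one. The check below is our own minimal sketch of that single condition; the MBP framework's full arbitrage-freeness requirements are richer:

```python
def arbitrage_free(prices):
    """Check one necessary arbitrage condition for noise-injected model pricing:
    `prices` maps noise variance -> price, and the price must be non-increasing
    as the variance grows (less noise may never be cheaper than more noise)."""
    by_variance = sorted(prices.items())  # ascending variance
    return all(p_low_noise >= p_high_noise
               for (_, p_low_noise), (_, p_high_noise) in zip(by_variance, by_variance[1:]))

arbitrage_free({0.1: 100.0, 0.5: 60.0, 1.0: 30.0})  # consistent price curve
```

A seller's revenue-maximization step would then search over price curves subject to constraints like this one.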
We examine the design space of auction mechanisms and identify three core activities that structure this space. Formal parameters qualifying the performance of core activities enable precise specification of auction rules. This specification constitutes an auction description language that can be used in the implementation of configurable marketplaces. The specification also provides a framework for organizing previous work and identifying new possibilities in auction design. Given that many multiagent systems involve the allocation of resources, it is natural that the connection between AI and economics has become a common theme in AI. This emphasis is also certainly influenced by the automation of commercial activities on the internet and the potential benefits of intelligent software support for these economic activities. Auctions are central to this confluence of research agendas because they represent a class of basic mechanisms by which economic systems compute the outcome of ...
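The idea of specifying auction rules through formal parameters can be illustrated with a tiny configurable single-item auction, where one parameter selects the pricing rule. This is a simplified invention for illustration, not the paper's auction description language:

```python
def run_auction(bids, pricing_rule="second"):
    """A single-item sealed-bid auction configurable by a `pricing_rule`
    parameter: the highest bidder wins and pays either their own bid
    ("first") or the runner-up's bid ("second")."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    if pricing_rule == "first":
        payment = top_bid
    elif pricing_rule == "second":
        payment = ranked[1][1] if len(ranked) > 1 else top_bid
    else:
        raise ValueError(f"unknown pricing rule: {pricing_rule}")
    return winner, payment

run_auction({"alice": 10, "bob": 7, "carol": 5})  # second-price by default
```

A full description language would expose many more such parameters (bidding rules, clearing schedules, information revelation), which is exactly the design space the paper organizes.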
We discuss a data market technique based on both the intrinsic value (relevance and uniqueness) and the extrinsic value (influenced by supply and demand) of data. For intrinsic value, we explain how to perform valuation of data in absolute terms (i.e., just by itself), relatively (i.e., in comparison to multiple datasets), or conditionally (i.e., valuing new data given currently existing data).
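Conditional valuation can be sketched with a crude uniqueness-based proxy: only records not already held by the buyer contribute marginal value. The function and its linear per-record pricing are our own illustrative assumptions, not the technique described above:

```python
def conditional_value(new_records, existing_records, price_per_record=1.0):
    """Value a new dataset conditionally on data already held: records that
    duplicate existing ones add nothing, so only novel records are priced
    (a simple proxy for uniqueness / marginal value)."""
    novel = set(new_records) - set(existing_records)
    return len(novel) * price_per_record

conditional_value(["a", "b", "c"], existing_records=["b"])  # only "a" and "c" are novel
```

Absolute valuation corresponds to an empty `existing_records`, and relative valuation would compare such scores across multiple candidate datasets.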