GAI Networks for Utility Elicitation

AAAI Conferences

Assuming the decision maker behaves according to the expected utility (EU) model, we investigate the elicitation of generalized additively decomposable utility functions on a product set (GAI-decomposable utilities). We propose a general elicitation procedure based on a new graphical model called a GAI-network. The latter is used to represent and manage independences between attributes, much as junction graphs model independences between random variables in Bayesian networks. It is used to design an elicitation questionnaire based on simple lotteries involving completely specified outcomes. Our elicitation procedure is convenient for any GAI-decomposable utility function, thus enhancing the possibilities offered by UCP-networks.
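As a concrete illustration of the decomposition itself (a minimal sketch with made-up attributes and sub-utility tables, not the paper's elicitation procedure): a GAI-decomposable utility is a sum of sub-utilities, each defined over a small subset of attributes, and the subsets may overlap.

```python
# Hedged sketch of GAI decomposition. All attribute names and sub-utility
# values below are illustrative assumptions, not taken from the paper.

def gai_utility(outcome, factors):
    """Evaluate a GAI-decomposable utility.

    outcome: dict mapping attribute name -> value
    factors: list of (attribute_tuple, sub_utility_table) pairs,
             where the table maps value tuples to real numbers
    """
    total = 0.0
    for attrs, table in factors:
        key = tuple(outcome[a] for a in attrs)
        total += table[key]
    return total

# Toy example: u(A, B, C) = u1(A, B) + u2(B, C); the factors share attribute B.
factors = [
    (("A", "B"), {("a0", "b0"): 1.0, ("a0", "b1"): 0.2,
                  ("a1", "b0"): 0.5, ("a1", "b1"): 0.9}),
    (("B", "C"), {("b0", "c0"): 0.3, ("b0", "c1"): 0.7,
                  ("b1", "c0"): 0.1, ("b1", "c1"): 0.4}),
]
print(gai_utility({"A": "a1", "B": "b1", "C": "c1"}, factors))  # 0.9 + 0.4
```

The overlapping factor scopes are exactly what a GAI-network's cliques would capture; eliciting the entries of each small table, rather than one utility value per full outcome, is what keeps the questionnaire tractable.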

Machine learning prowess on display


More than 80 Amazon scientists and engineers will attend this year's International Conference on Machine Learning (ICML) in Stockholm, Sweden, with 11 papers co-authored by Amazonians being presented. "ICML is one of the leading outlets for machine learning research," says Neil Lawrence, director of machine learning for Amazon's Supply Chain Optimization Technologies program. "It's a great opportunity to find out what other researchers have been up to and share some of our own learnings." At ICML, members of Lawrence's team will present a paper titled "Structured Variationally Auto-encoded Optimization," which describes a machine-learning approach to optimization, or choosing the values for variables in some process that maximize a particular outcome. The first author on the paper is Xiaoyu Lu, a graduate student at the University of Oxford who worked on the project as an intern at Amazon last summer, then returned in January to do some follow-up work.

Adversarial $\alpha$-divergence Minimization for Bayesian Approximate Inference

Machine Learning

Neural networks are popular models for regression. They are often trained via back-propagation to find weight values that correctly predict the observed data. Although back-propagation has shown good performance in many applications, it cannot easily output an estimate of the uncertainty in the predictions made. Measuring this uncertainty in the predictions of machine learning models is a critical aspect with important applications. Uncertainty estimates can be obtained by following a Bayesian approach in which a posterior distribution of the model parameters is computed. The posterior distribution summarizes which parameter values are compatible with the data. Typically, this posterior distribution is intractable and has to be approximated. Several approaches have been considered for solving this problem. We propose here a general method for approximate Bayesian inference based on minimizing $\alpha$-divergences, which allows for flexible approximate distributions. The method is evaluated in extensive experiments on Bayesian neural networks for regression. The results show that it often gives better performance in terms of the test log-likelihood and sometimes in terms of the squared error.
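As background for the divergence being minimized (not the paper's adversarial training scheme): one common parameterization of the $\alpha$-divergence, sketched here for discrete distributions under the assumption that $\alpha \notin \{0, 1\}$, recovers the two KL divergences in the limits and is symmetric at $\alpha = 0.5$.

```python
# Hedged sketch: Amari-style alpha-divergence between two discrete
# distributions p and q (given as lists of probabilities). Valid for
# alpha outside {0, 1}; KL divergences are recovered only in the limits.

def alpha_divergence(p, q, alpha):
    s = sum(pi ** alpha * qi ** (1.0 - alpha) for pi, qi in zip(p, q))
    return (1.0 - s) / (alpha * (1.0 - alpha))

p = [0.7, 0.2, 0.1]
q = [0.5, 0.3, 0.2]
print(alpha_divergence(p, p, 0.5))  # 0: a distribution has zero divergence to itself
print(alpha_divergence(p, q, 0.5))  # at alpha = 0.5 the divergence is symmetric in p, q
```

Varying $\alpha$ trades off mass-covering versus mode-seeking behavior of the fitted approximation, which is why a family of $\alpha$-divergences gives more flexibility than committing to a single KL objective.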

Some Properties of Batch Value of Information in the Selection Problem

Journal of Artificial Intelligence Research

Given a set of items of unknown utility, we need to select one with a utility as high as possible ("the selection problem"). Measurements (possibly noisy) of item values prior to selection are allowed, at a known cost. The goal is to optimize the overall sequential decision process of measurements and selection. Value of information (VOI) is a well-known scheme for selecting measurements, but the intractability of the problem typically leads to using myopic VOI estimates. Other schemes have also been proposed, some with approximation guarantees, based on submodularity criteria. However, it was observed that the VOI is not submodular in general. In this paper we examine theoretical properties of VOI for the selection problem, and identify cases of submodularity and supermodularity. We suggest how to use these properties to compute approximately optimal measurement batch policies, with an example based on a "wine selection problem".
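The myopic VOI of a single measurement can be made concrete under simplifying assumptions that are mine, not the paper's: Gaussian beliefs over item values and a single noiseless measurement of one item, after which we pick the better of the observed value and the best remaining mean.

```python
# Hedged sketch of myopic value of information in the selection problem.
# Assumptions (illustrative only): independent Gaussian beliefs N(mu_i,
# sigma_i^2) over item values, and a noiseless observation of one item.
import math

def norm_pdf(z):
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def myopic_voi(mu, sigma, i):
    """Expected gain from observing item i's true value before selecting.

    After observing x ~ N(mu[i], sigma[i]^2) we choose max(x, best_other),
    where best_other is the highest mean among the remaining items, so
    VOI = E[max(x, best_other)] - max(mu), using the Gaussian identity
    E[max(X, c)] = mu * Phi(z) + c * Phi(-z) + sigma * phi(z),
    with z = (mu - c) / sigma.
    """
    best_now = max(mu)
    best_other = max(m for j, m in enumerate(mu) if j != i)
    z = (mu[i] - best_other) / sigma[i]
    exp_max = (mu[i] * norm_cdf(z) + best_other * norm_cdf(-z)
               + sigma[i] * norm_pdf(z))
    return exp_max - best_now

# Two items with equal means: measuring either one has positive VOI,
# because the observation can change which item we select.
print(myopic_voi([0.0, 0.0], [1.0, 1.0], 0))
```

With known measurement costs, a myopic policy would measure the item maximizing VOI minus cost, stopping when that difference is non-positive; the submodularity and supermodularity cases identified in the paper govern how well such greedy batch choices approximate the optimum.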

Yelling at Amazon's Alexa

The New Yorker

The first time I met Alexa, the A.I. robot voice inside the wine-bottle-size speaker known as the Amazon Echo, I was at my friends' house, in rural New England. "Currently, it is seventy-five degrees," she told us, and assured us that it would not rain. This was a year ago, and I'd never encountered a talking speaker before. When I razzed my friend for his love of gadgetry, he showed me some of Alexa's other tricks: telling us the weather, keeping a shopping list, ordering products from Amazon. This summer, Alexa decided again and again who the tickle monster's next victim was, saying my friends' children's adorable nicknames in her strange A.I. accent.