This workshop will present some of the very recent developments in non-convex analysis and optimization, as reported across diverse research fields: from machine learning and mathematical programming to statistics and theoretical computer science. We believe this workshop can bring researchers closer together to facilitate a discussion of why tackling non-convexity is important, where it arises, why non-convex schemes work well in practice, and how we can make further progress on interesting research directions and open problems.
AAAI is pleased to present the 2011 Spring Symposium Series, to be held Monday through Wednesday, March 21-23, 2011, at Stanford University. Highlights include the AAAI/SIGART Doctoral Consortium and the second AAAI Educational Advances in Artificial Intelligence Symposium, to name only a few. For complete information on these programs, including Tutorial
Markov chain Monte Carlo (MCMC) methods have a drawback when the target distribution or likelihood function is computationally expensive to evaluate, especially when working with big data. This paper focuses on the Metropolis-Hastings (MH) algorithm for unimodal distributions. An enhanced MH algorithm is proposed that requires fewer expensive function evaluations, has a shorter burn-in period, and uses a better proposal distribution. The main innovations are the use of Bayesian optimization to reach the high-probability region quickly, emulation of the target distribution with Gaussian processes (GPs), and a proposal distribution built from a Laplace approximation of the GP, which better captures the underlying correlation. The experiments show significant improvement over regular MH, and a statistical comparison between the results of the two algorithms is presented.
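For context, the regular MH baseline that the paper improves on can be sketched as a random-walk sampler with a symmetric Gaussian proposal. This is a minimal illustration only; the function names and the proposal scale are illustrative, and none of the paper's enhancements (Bayesian optimization, GP emulation, Laplace-approximation proposal) appear here.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples, proposal_std=1.0, rng=None):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    log_p = log_target(x)
    samples = []
    for _ in range(n_samples):
        # Propose a move; the proposal is symmetric, so the Hastings
        # ratio reduces to p(x') / p(x).
        x_new = x + proposal_std * rng.standard_normal(x.shape)
        log_p_new = log_target(x_new)
        # Accept with probability min(1, p(x') / p(x)), done in log space.
        if np.log(rng.uniform()) < log_p_new - log_p:
            x, log_p = x_new, log_p_new
        samples.append(x.copy())
    return np.array(samples)

# Example: sample from a standard normal (unimodal) target,
# starting far from the mode so the burn-in is visible.
log_target = lambda x: -0.5 * float(x @ x)
chain = metropolis_hastings(log_target, x0=[5.0], n_samples=5000, proposal_std=0.8)
```

Every iteration here calls `log_target` once; when that evaluation is expensive, the cost of a long chain and a long burn-in is exactly the problem the paper targets.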
Organizers of the 37th International Conference on Machine Learning (ICML) have announced this year's Test of Time award, which goes to a team from the California Institute of Technology, the University of Pennsylvania, and Saarland University. The ICML Test of Time award recognizes an ICML paper from ten years ago that has proven influential, with significant impacts in the field, "including both research and practice." Authors: Niranjan Srinivas, Andreas Krause, Sham Kakade, Matthias Seeger. Institutions: California Institute of Technology, University of Pennsylvania, Saarland University. Abstract: Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low RKHS norm. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization.
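The awarded paper analyzes the GP-UCB rule, which queries the point maximizing an upper confidence bound on the unknown function, i.e. the point where posterior mean plus a scaled posterior standard deviation is largest. The following is a minimal sketch on a 1-D grid, assuming an RBF kernel and a constant confidence scale `beta`; the paper's actual beta_t schedule, kernel choices, and regret analysis are not reproduced here.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # Squared-exponential (RBF) kernel between two 1-D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-2):
    # Standard GP regression posterior mean and variance at test points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    # Diagonal of Ks K^{-1} Ks^T, subtracted from the prior variance k(x,x)=1.
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mu, np.maximum(var, 1e-12)

def gp_ucb(f, grid, n_rounds, beta=2.0, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    X = [grid[rng.integers(len(grid))]]  # random initial query
    y = [f(X[0])]
    for _ in range(1, n_rounds):
        mu, var = gp_posterior(np.array(X), np.array(y), grid)
        # UCB rule: pick the grid point maximizing mu + sqrt(beta) * sigma.
        x_next = grid[np.argmax(mu + np.sqrt(beta * var))]
        X.append(x_next)
        y.append(f(x_next))
    return np.array(X), np.array(y)

# Example: maximize a smooth 1-D function with its peak at x = 0.3.
f = lambda x: -(x - 0.3) ** 2
grid = np.linspace(0.0, 1.0, 200)
X, y = gp_ucb(f, grid, n_rounds=25)
```

The `sqrt(beta)` factor trades off exploitation (high posterior mean) against exploration (high posterior uncertainty); the paper's contribution is showing how to choose this scale over time so that cumulative regret grows sublinearly.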
The purpose of this paper is twofold. On the one hand, we present a general framework for Bayesian optimization and compare it with related work in active learning and Bayesian numerical analysis. On the other hand, Bayesian optimization and related problems (bandits, sequential experimental design) depend heavily on the surrogate model that is selected, yet there is no clear standard in the literature. We therefore present a fast and flexible toolbox that allows different models and criteria to be tested and combined with little effort. It includes most state-of-the-art contributions, algorithms, and models. Its speed also removes part of the stigma that Bayesian optimization methods are only good for "expensive functions". The software is free and can be used on many operating systems and from many programming languages.