Adaptive Design of Experiments for Conservative Estimation of Excursion Sets

arXiv.org Machine Learning

We consider a Gaussian process model trained on a few evaluations of an expensive-to-evaluate deterministic function, and we study the problem of estimating a fixed excursion set of this function. We review the concept of conservative estimates, recently introduced in this framework, and, in particular, we focus on estimates based on Vorob'ev quantiles. We present a method that sequentially selects new evaluations of the function in order to reduce the uncertainty on such estimates. The sequential strategies are first benchmarked on artificial test cases generated from Gaussian process realizations in two and five dimensions, and then applied to two reliability engineering test cases.
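
As a rough illustration of the Vorob'ev machinery this abstract refers to, the sketch below computes the excursion (coverage) probability from a GP posterior on a 1-D grid, then tunes the quantile level so that the volume of the Vorob'ev quantile set matches the expected excursion volume. The toy function, RBF kernel, fixed hyperparameters, and threshold are illustrative assumptions, not the paper's setup or its conservative-estimate construction.

```python
# Minimal sketch (assumptions noted above): Vorob'ev quantiles of an excursion
# set {x : f(x) >= T} under a Gaussian process posterior on a 1-D grid.
import numpy as np
from scipy.stats import norm

def rbf(a, b, ls=0.15):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# A handful of "expensive" evaluations of a toy deterministic function.
f = lambda x: np.sin(6 * x) + 0.5 * x
X = np.array([0.05, 0.3, 0.55, 0.8, 0.95])
y = f(X)

# GP posterior with zero prior mean and a small jitter for numerical stability.
grid = np.linspace(0, 1, 400)
K = rbf(X, X) + 1e-8 * np.eye(len(X))
Ks = rbf(grid, X)
mean = Ks @ np.linalg.solve(K, y)
var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
sd = np.sqrt(np.maximum(var, 1e-12))

T = 0.8                                   # excursion threshold (illustrative)
coverage = norm.cdf((mean - T) / sd)      # p_n(x) = P(f(x) >= T | data)

def vorobev_quantile(rho):
    return coverage >= rho                # Q_rho = {x : p_n(x) >= rho}

# Vorob'ev expectation: choose rho so vol(Q_rho) matches the expected volume.
target = coverage.mean()
lo, hi = 0.0, 1.0
for _ in range(50):
    rho = 0.5 * (lo + hi)
    if vorobev_quantile(rho).mean() > target:
        lo = rho
    else:
        hi = rho

print("Vorob'ev expectation level rho* ~", round(rho, 3))
print("relative volume of Q_0.95 (a more conservative set):",
      vorobev_quantile(0.95).mean())
```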


Active Learning for Gaussian Process Considering Uncertainties with Application to Shape Control of Composite Fuselage

arXiv.org Machine Learning

This paper has been accepted by IEEE Transactions on Automation Science and Engineering; this preprint is the accepted version, not the IEEE published version.

Abstract--In the machine learning domain, active learning is an iterative data-selection algorithm for maximizing information acquisition and improving model performance with limited training samples. It is particularly useful for industrial applications where training samples are expensive, time-consuming, or difficult to obtain. Existing methods mainly focus on active learning for classification, and only a few are designed for regression models such as linear regression or Gaussian processes. Uncertainties from measurement errors and intrinsic input noise inevitably exist in experimental data and further affect modeling performance, yet existing active learning methods for Gaussian processes do not incorporate these uncertainties. In this paper, we propose two new active learning algorithms for Gaussian processes with uncertainties: a variance-based weighted active learning algorithm and a D-optimal weighted active learning algorithm. Through a numerical study, we show that the proposed approach incorporates the impact of the uncertainties and achieves better prediction performance. The approach has been applied to improving predictive modeling for automatic shape control of composite fuselage.

I. INTRODUCTION. Active learning is a type of iterative supervised learning that focuses on maximizing information acquisition with limited samples. In the statistics literature, this process is also called optimal experimental design or sequential design. The main idea of active learning is to iteratively pose a "query" or "design" to explore the most informative new experimental samples according to the information obtained from the current samples. In many machine learning applications, especially in industrial systems, the explanatory data are rich and easy to obtain, but the response data are very expensive, time-consuming, or difficult to acquire. For example, when training autonomous driving algorithms, large volumes of media (e.g., images, videos) require oracle users to mark them with particular labels such as "vehicle", "street sign", or "road lines". Annotating many such instances can be tedious, redundant, and time-consuming.
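
To make the active-learning loop concrete, here is a minimal sketch of a variance-based criterion for GP regression in which each location's posterior variance is down-weighted by an assumed-known observation-noise level. The weighting, the heteroscedastic noise model, and the toy response surface are illustrative assumptions; this is not the paper's exact variance-based weighted or D-optimal weighted algorithm.

```python
# Minimal sketch of a variance-based active learning loop for GP regression.
# The weighting by a per-location noise proxy is an illustrative assumption.
import numpy as np

def rbf(a, b, ls=0.2):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, grid, noise=1e-4):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(grid, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mean, np.maximum(var, 0.0)

rng = np.random.default_rng(0)
truth = lambda x: np.sin(5 * x) + x          # unknown response surface (toy)
noise_sd = lambda x: 0.05 + 0.2 * x          # hypothetical heteroscedastic noise level

grid = np.linspace(0, 1, 200)
X = np.array([0.1, 0.9])                     # small initial design
y = truth(X) + noise_sd(X) * rng.standard_normal(len(X))

for step in range(8):
    mean, var = gp_posterior(X, y, grid)
    # Weighted variance criterion: prefer locations that are both uncertain under
    # the GP and not dominated by the (assumed known) observation noise.
    score = var / (var + noise_sd(grid) ** 2)
    x_new = grid[np.argmax(score)]
    y_new = truth(np.array([x_new])) + noise_sd(np.array([x_new])) * rng.standard_normal(1)
    X, y = np.append(X, x_new), np.append(y, y_new)
    print(f"step {step}: queried x = {x_new:.3f}")
```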


Evaluating Gaussian Process Metamodels and Sequential Designs for Noisy Level Set Estimation

arXiv.org Machine Learning

We consider the problem of learning the level set for which a noisy black-box function exceeds a given threshold. To efficiently reconstruct the level set, we investigate Gaussian process (GP) metamodels. Our focus is on strongly stochastic samplers, in particular with heavy-tailed simulation noise and low signal-to-noise ratio. To guard against noise misspecification, we assess the performance of three variants: (i) GPs with Student-$t$ observations; (ii) Student-$t$ processes (TPs); and (iii) classification GPs modeling the sign of the response. In conjunction with these metamodels, we analyze several acquisition functions for guiding the sequential experimental designs, extending existing stepwise uncertainty reduction criteria to the stochastic contour-finding context. This also motivates our development of (approximate) updating formulas to efficiently compute such acquisition functions. Our schemes are benchmarked by using a variety of synthetic experiments in 1--6 dimensions. We also consider an application of level set estimation for determining the optimal exercise policy of Bermudan options in finance.
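
The sketch below illustrates sequential design for noisy level-set estimation with a plain Gaussian GP: the next point is the one whose sign relative to the threshold is most ambiguous under the posterior. This contour-ambiguity heuristic is a simple stand-in for the stepwise uncertainty reduction criteria analyzed in the paper, and the toy latent function, kernel, and noise level are assumptions.

```python
# Minimal sketch: level-set estimation from a noisy sampler with a standard GP.
# The acquisition targets the point where exceedance of the threshold is most
# uncertain; it is not the paper's SUR criteria or its Student-t/TP variants.
import numpy as np
from scipy.stats import norm

def rbf(a, b, ls=0.2):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

rng = np.random.default_rng(1)
f = lambda x: np.cos(4 * x) + 0.3 * x        # toy latent response
T, noise_sd = 0.2, 0.3                       # threshold and simulation noise level

grid = np.linspace(0, 1, 300)
X = np.array([0.2, 0.5, 0.8])
y = f(X) + noise_sd * rng.standard_normal(len(X))

for step in range(10):
    K = rbf(X, X) + noise_sd ** 2 * np.eye(len(X))
    Ks = rbf(grid, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    sd = np.sqrt(np.maximum(var, 1e-12))
    p_exceed = norm.cdf((mean - T) / sd)     # P(f(x) > T | data)
    ambiguity = 0.5 - np.abs(p_exceed - 0.5) # largest where the contour is uncertain
    x_new = grid[np.argmax(ambiguity)]
    y_new = f(x_new) + noise_sd * rng.standard_normal()
    X, y = np.append(X, x_new), np.append(y, y_new)

print("level-set estimate from the last fit (fraction of grid above T):",
      (mean > T).mean())
```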


Gradient descent in Gaussian random fields as a toy model for high-dimensional optimisation in deep learning

arXiv.org Machine Learning

In this paper we model the loss function of high-dimensional optimization problems by a Gaussian random field, or equivalently a Gaussian process. Our aim is to study gradient descent in such loss functions or energy landscapes and to compare it with results obtained from real high-dimensional optimization problems such as those encountered in deep learning. In particular, we analyze the distribution of the improved loss function after a step of gradient descent, provide analytic expressions for its moments, and prove asymptotic normality as the dimension of the parameter space becomes large. Moreover, we compare this with the expectation of the global minimum of the landscape obtained by means of the Euler characteristic of excursion sets. Besides complementing our analytical findings with numerical results from simulated Gaussian random fields, we also compare them to loss functions obtained from optimisation problems on synthetic and real data sets by proposing a "black box" random-field toy model for a deep neural network loss function.
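
One cheap way to experiment with this setting is to build a differentiable approximate realization of a stationary Gaussian random field from random Fourier features of an RBF covariance and run gradient descent on it. The sketch below does exactly that; the dimension, feature count, lengthscale, and step size are illustrative assumptions, and the construction is only a numerical stand-in for the paper's analytical framework.

```python
# Minimal sketch: gradient descent on an approximate Gaussian-random-field
# realization built from random Fourier features (Rahimi & Recht construction).
import numpy as np

rng = np.random.default_rng(2)
dim, n_features, lengthscale = 50, 2000, 1.0

# f(x) ~= sqrt(2/m) * sum_j w_j cos(omega_j . x + b_j) approximates a zero-mean
# stationary Gaussian field with RBF covariance.
omega = rng.standard_normal((n_features, dim)) / lengthscale
b = rng.uniform(0, 2 * np.pi, n_features)
w = rng.standard_normal(n_features)

def field(x):
    return np.sqrt(2.0 / n_features) * w @ np.cos(omega @ x + b)

def grad_field(x):
    # d/dx of the cosine features: -sin(omega.x + b) * omega, weighted by w.
    return -np.sqrt(2.0 / n_features) * (w * np.sin(omega @ x + b)) @ omega

x = rng.standard_normal(dim)
lr = 0.05
for step in range(200):
    g = grad_field(x)
    x -= lr * g
    if step % 50 == 0:
        print(f"step {step:3d}: loss = {field(x):+.4f}, |grad| = {np.linalg.norm(g):.4f}")
```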


A new integral loss function for Bayesian optimization

arXiv.org Machine Learning

We consider the problem of maximizing a real-valued continuous function $f$ using a Bayesian approach. Since the early work of Jonas Mockus and Antanas \v{Z}ilinskas in the 1970s, the problem of optimization is usually formulated by considering the loss function $\max f - M_n$ (where $M_n$ denotes the best function value observed after $n$ evaluations of $f$). This loss function puts emphasis on the value of the maximum, at the expense of the location of the maximizer. In the special case of a one-step Bayes-optimal strategy, it leads to the classical Expected Improvement (EI) sampling criterion. This is a special case of a Stepwise Uncertainty Reduction (SUR) strategy, where the risk associated with a certain uncertainty measure (here, the expected loss) on the quantity of interest is minimized at each step of the algorithm. In this article, assuming that $f$ is defined over a measure space $(\mathbb{X}, \lambda)$, we propose to consider instead the integral loss function $\int_{\mathbb{X}} (f - M_n)_{+}\, d\lambda$, and we show that this leads, in the case of a Gaussian process prior, to a new numerically tractable sampling criterion that we call $\rm EI^2$ (for Expected Integrated Expected Improvement). A numerical experiment illustrates that a SUR strategy based on this new sampling criterion reduces the error on both the value and the location of the maximizer faster than the EI-based strategy.
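
To make the SUR idea behind such a criterion concrete, the sketch below approximates the integrated expected improvement $\int_{\mathbb{X}} E[(f - M_n)_+]\, d\lambda$ on a grid and, for each candidate point, averages the post-acquisition value of this integral over a few quantiles of the candidate's GP predictive distribution. The kernel, quadrature, candidate set, and toy objective are illustrative assumptions; this is an EI$^2$-style sketch, not the paper's tractable closed-form criterion.

```python
# Minimal sketch of a SUR criterion in the spirit of EI^2: choose the candidate
# whose anticipated observation most reduces the grid-approximated integrated EI.
import numpy as np
from scipy.stats import norm

def rbf(a, b, ls=0.15):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def posterior(X, y, grid, jitter=1e-8):
    K = rbf(X, X) + jitter * np.eye(len(X))
    Ks = rbf(grid, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mean, np.sqrt(np.maximum(var, 1e-12))

def integrated_ei(X, y, grid):
    mean, sd = posterior(X, y, grid)
    z = (mean - y.max()) / sd
    ei = sd * (z * norm.cdf(z) + norm.pdf(z))   # classical EI at each grid point
    return ei.mean()                            # approximates the integral over X

f = lambda x: -(x - 0.3) ** 2 + 0.1 * np.sin(12 * x)
grid = np.linspace(0, 1, 200)
X = np.array([0.1, 0.6, 0.9]); y = f(X)

quantiles = norm.ppf([0.1, 0.3, 0.5, 0.7, 0.9])  # crude quadrature over outcomes
candidates = grid[::10]
scores = []
for x_c in candidates:
    m_c, s_c = posterior(X, y, np.array([x_c]))
    future = [integrated_ei(np.append(X, x_c), np.append(y, m_c + s_c * q), grid)
              for q in quantiles]
    scores.append(np.mean(future))               # expected residual integrated EI
x_next = candidates[int(np.argmin(scores))]
print("EI^2-style choice of next evaluation:", round(float(x_next), 3))
```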