Minimax Error of Interpolation and Optimal Design of Experiments for Variable Fidelity Data

arXiv.org Machine Learning

Engineering problems often involve data sources of variable fidelity with different costs of obtaining an observation. In particular, one can use both a cheap low fidelity function (e.g. a computational experiment with a CFD code) and an expensive high fidelity function (e.g. a wind tunnel experiment) to generate a data sample in order to construct a regression model of the high fidelity function. The key question in this setting is how the sizes of the high and low fidelity data samples should be selected in order to stay within a given computational budget and maximize the accuracy of the regression model before committing resources to data acquisition. In this paper we obtain minimax interpolation errors for single and variable fidelity scenarios for multivariate Gaussian process regression. Evaluation of the minimax errors allows us to identify cases in which variable fidelity data provide better interpolation accuracy than exclusively high fidelity data for the same computational budget. These results allow us to calculate the optimal shares of variable fidelity data samples under the given computational budget constraint. Real and synthetic data experiments suggest that using the obtained optimal shares often outperforms natural heuristics in terms of regression accuracy.
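
The central trade-off described above, splitting a fixed computational budget between cheap low-fidelity and expensive high-fidelity observations, can be illustrated with a minimal Python sketch. The budget, costs, and the error_proxy function below are illustrative assumptions; the paper scores each share with closed-form minimax interpolation errors rather than this placeholder.

import numpy as np

def sample_sizes(budget, cost_high, cost_low, share_high):
    # Spend a fraction `share_high` of the budget on high-fidelity points,
    # the remainder on low-fidelity points.
    n_high = int(share_high * budget // cost_high)
    n_low = int((1.0 - share_high) * budget // cost_low)
    return n_high, n_low

def error_proxy(n_high, n_low):
    # Hypothetical stand-in for the minimax interpolation error of a two-fidelity GP
    # (assumption: a low-fidelity point is worth roughly 10% of a high-fidelity one).
    return 1.0 / (1.0 + n_high + 0.1 * n_low)

budget, cost_high, cost_low = 100.0, 10.0, 1.0      # illustrative costs per observation
shares = np.linspace(0.1, 1.0, 10)
errors = [error_proxy(*sample_sizes(budget, cost_high, cost_low, s)) for s in shares]
best = shares[int(np.argmin(errors))]
print(f"best high-fidelity budget share: {best:.1f}")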


A Strategy for Adaptive Sampling of Multi-fidelity Gaussian Process to Reduce Predictive Uncertainty

arXiv.org Machine Learning

Multi-fidelity Gaussian process regression is a common approach to computationally demanding tasks such as optimization, calibration and uncertainty quantification. Adaptive sampling for a multi-fidelity Gaussian process is a challenging task because we must select not only the next sampling location of the design variable but also the fidelity level of the simulator. This issue is often addressed by including the cost of the simulator as another factor in the search criterion, in conjunction with the uncertainty reduction metric. In this work, we extend the traditional design-of-experiments framework for the multi-fidelity Gaussian process by partitioning the prediction uncertainty based on the fidelity level and the associated cost of execution. In addition, we utilize the Believer concept, which quantifies the effect of adding an exploratory design point on the Gaussian process uncertainty prediction. We demonstrate our framework using academic examples as well as an industrial application concerning the steady-state thermodynamic operating point of a fluidized bed process.
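
A rough sketch of the cost-aware selection idea described above: here each fidelity level gets its own independent Gaussian process (a simplification of a genuine multi-fidelity model), and a candidate (location, fidelity) pair is scored by predictive uncertainty per unit cost. The toy simulators, costs, and scoring rule are assumptions for illustration, not the criterion proposed in the paper.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
costs = {"low": 1.0, "high": 10.0}                        # assumed execution cost per fidelity level
simulators = {"low": lambda x: np.sin(6 * x).ravel(),     # cheap, biased approximation (toy)
              "high": lambda x: np.sin(6 * x).ravel() + 0.3 * x.ravel()}
X_by_level = {"low": rng.uniform(0, 1, (8, 1)),           # many cheap points, few expensive ones
              "high": rng.uniform(0, 1, (3, 1))}

# One independent GP per fidelity level (a simplification of a true multi-fidelity model).
models = {level: GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6).fit(X, simulators[level](X))
          for level, X in X_by_level.items()}

# Score each candidate (fidelity, location) by predictive uncertainty per unit cost.
candidates = np.linspace(0, 1, 101).reshape(-1, 1)
def score(level, x):
    _, std = models[level].predict(x.reshape(1, -1), return_std=True)
    return std[0] / costs[level]

level_best, x_best = max(((lvl, x) for lvl in models for x in candidates),
                         key=lambda lx: score(*lx))
print(f"next sample: fidelity={level_best}, x={x_best[0]:.2f}")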


Bifidelity data-assisted neural networks in nonintrusive reduced-order modeling

arXiv.org Machine Learning

In this paper, we present a new nonintrusive reduced basis method for settings where a cheap low-fidelity model and an expensive high-fidelity model are available. The method relies on proper orthogonal decomposition (POD) to generate the high-fidelity reduced basis and on a shallow multilayer perceptron to learn the high-fidelity reduced coefficients. In contrast to other methods, one distinct feature of the proposed method is that it incorporates features extracted from the low-fidelity data as input features; this not only improves the predictive capability of the neural network but also decouples the high-fidelity simulation from the online stage. Due to its nonintrusive nature, it is applicable to general parameterized problems. We also provide several numerical examples to illustrate the effectiveness and performance of the proposed method.
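
The workflow described above, POD of high-fidelity snapshots plus a shallow network mapping low-fidelity features to high-fidelity reduced coefficients, can be sketched as follows. The one-dimensional toy models, snapshot counts, basis size and network size are assumptions for illustration, not the paper's setup.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)                        # spatial grid for the high-fidelity field

def high_fidelity(mu):                            # expensive model (toy stand-in)
    return np.sin(2 * np.pi * mu * x) * np.exp(-x)

def low_fidelity(mu):                             # cheap model: coarse grid, slightly biased
    return np.sin(2 * np.pi * mu * x[::10])

params = rng.uniform(0.5, 2.0, 40)
U_hf = np.stack([high_fidelity(m) for m in params], axis=1)   # snapshot matrix (n_x, n_snapshots)

# POD basis from the high-fidelity snapshots
phi, s, _ = np.linalg.svd(U_hf, full_matrices=False)
basis = phi[:, :5]
coeffs = basis.T @ U_hf                                        # high-fidelity reduced coefficients

# Input features: the parameter value plus the low-fidelity field samples
features = np.column_stack([params[:, None], np.stack([low_fidelity(m) for m in params])])
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
net.fit(features, coeffs.T)

# Online prediction for a new parameter requires only the cheap low-fidelity run
mu_new = 1.3
feat_new = np.concatenate([[mu_new], low_fidelity(mu_new)]).reshape(1, -1)
u_pred = basis @ net.predict(feat_new).ravel()
u_true = high_fidelity(mu_new)
print("relative error:", np.linalg.norm(u_pred - u_true) / np.linalg.norm(u_true))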


Sequential design of experiments to estimate a probability of exceeding a threshold in a multi-fidelity stochastic simulator

arXiv.org Machine Learning

In this article, we consider a stochastic numerical simulator to assess the impact of some factors on a phenomenon. The simulator is seen as a black box with inputs and outputs. The quality of a simulation, hereafter referred to as fidelity, is assumed to be tunable by means of an additional input of the simulator (e.g., a mesh size parameter): high-fidelity simulations provide more accurate results, but are time-consuming. Using a limited computation-time budget, we want to estimate, for any value of the physical inputs, the probability that a certain scalar output of the simulator will exceed a given critical threshold at the highest fidelity level. The problem is addressed in a Bayesian framework, using a Gaussian process model of the multi-fidelity simulator. We consider a Bayesian estimator of the probability, together with an associated measure of uncertainty, and propose a new multi-fidelity sequential design strategy, called Maximum Speed of Uncertainty Reduction (MSUR), to select the value of physical inputs and the fidelity level of new simulations. The MSUR strategy is tested on an example.
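
A minimal sketch of the exceedance-probability estimator only (the MSUR design criterion itself is not reproduced here): given a Gaussian process posterior mean m(x) and standard deviation s(x) at the highest fidelity, the probability that the output exceeds a threshold T is Phi((m(x) - T) / s(x)). The training data and threshold below are illustrative assumptions.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (12, 1))                     # high-fidelity design points (toy)
y = np.sin(6 * X).ravel() + 0.1 * rng.normal(size=12)

gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-2).fit(X, y)

T = 0.8                                            # critical threshold (assumed)
x_grid = np.linspace(0, 1, 5).reshape(-1, 1)
mean, std = gp.predict(x_grid, return_std=True)
p_exceed = norm.cdf((mean - T) / std)              # Bayesian estimate of P(output > T)
for xi, pi in zip(x_grid.ravel(), p_exceed):
    print(f"x = {xi:.2f}: P(exceed) ~ {pi:.2f}")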


Accurate machine learning in materials science facilitated by using diverse data sources

Nature

Scientists are always hunting for materials that have superior properties. They therefore continually synthesize, characterize and measure the properties of new materials using a range of experimental techniques. Computational modelling is also used to estimate the properties of materials. However, there is usually a trade-off between the cost of the experiments (or simulations) and the accuracy of the measurements (or estimates), which has limited the number of materials that can be tested rigorously. Writing in Nature Computational Science, Chen et al.1 report a machine-learning approach that combines data from multiple sources of measurements and simulations, all of which have different levels of approximation, to learn and predict materials' properties. Their method allows the construction of a more general and accurate model of such properties than was previously possible, thereby facilitating the screening of promising material candidates.