Purpose: Bayesian calibration is theoretically superior to standard direct-search algorithms because it can reveal the full joint posterior distribution of the calibrated parameters. However, to date, Bayesian calibration has not been used often in health decision sciences due to practical and computational burdens. In this paper we propose artificial neural networks (ANNs) as one solution to these limitations. Methods: Bayesian Calibration using Artificial Neural Networks (BayCANN) involves (1) training an ANN metamodel on a sample of model inputs and outputs, and (2) calibrating the trained ANN metamodel instead of the full model in a probabilistic programming language to obtain the joint posterior distribution of the calibrated parameters. We demonstrate BayCANN by calibrating a natural history model of colorectal cancer to adenoma prevalence and cancer incidence data. In addition, we compare the efficiency and accuracy of BayCANN against a Bayesian calibration performed directly on the simulation model using an incremental mixture importance sampling (IMIS) algorithm. Results: BayCANN was generally more accurate than IMIS in recovering the "true" parameter values. For eight of the nine calibrated parameters, the ratio of BayCANN's absolute deviation from the truth to that of IMIS was less than one, indicating that BayCANN was more accurate. In addition, BayCANN took about 15 minutes in total, compared with 80 minutes for the IMIS method. Conclusions: In our case study, BayCANN was more accurate than IMIS and roughly five times faster. Because BayCANN does not depend on the structure of the simulation model, it can be adapted to models of varying complexity with minor changes to its structure. We provide an open-source implementation of BayCANN in R.
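As a rough illustration of the two BayCANN steps, the sketch below trains an ANN metamodel on samples from an invented one-parameter simulator, then samples the posterior using the metamodel rather than the simulator. A simple Metropolis-Hastings sampler stands in for the probabilistic programming language used by the authors, and the simulator, prior range, and noise level are all assumptions made for the example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical one-parameter "simulation model" (invented for illustration)
def simulator(theta):
    return np.sin(theta) + 0.5 * theta

rng = np.random.default_rng(0)

# Step 1: train an ANN metamodel on a sample of model inputs and outputs
X = rng.uniform(-2.0, 2.0, size=(500, 1))
y = simulator(X[:, 0])
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
ann.fit(X, y)

# Step 2: calibrate the metamodel instead of the full model
# (Metropolis-Hastings here, standing in for a probabilistic programming language)
y_obs = simulator(1.0)   # calibration target; the "true" parameter is 1.0
sigma = 0.1              # assumed observation noise

def log_post(theta):
    if not -2.0 <= theta <= 2.0:   # uniform prior on [-2, 2]
        return -np.inf
    pred = ann.predict(np.array([[theta]]))[0]
    return -0.5 * ((pred - y_obs) / sigma) ** 2

theta, lp, samples = 0.0, log_post(0.0), []
for _ in range(2000):
    prop = theta + rng.normal(0.0, 0.3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
posterior = np.array(samples[500:])   # discard burn-in
```

Because every posterior evaluation hits the cheap metamodel rather than the simulator, the sampler's cost is decoupled from the simulation model's run time, which is the source of BayCANN's speed advantage.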
In this paper, a machine learning-based simulation framework for general-purpose multibody dynamics is introduced. The aim of the framework is to generate a well-trained meta-model of multibody dynamics (MBD) systems. To this end, a deep neural network (DNN) is employed within the framework to construct a data-based meta-model representing multibody systems. Constructing a well-defined training data set that includes the time variable is essential for obtaining accurate and reliable motion data such as displacements, velocities, accelerations, and forces. With the proposed approach, the meta-model estimates the motion of the system without solving the analytical equations of motion. The performance of the proposed DNN meta-modeling approach was evaluated on several MBD systems.
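The core idea, a network trained on (time, parameter) inputs that returns motion data directly, can be sketched on a toy system. Here an undamped mass-spring oscillator with a known closed-form solution plays the role of the MBD system (an assumption for the example), and an MLP stands in for the paper's DNN; note that time enters the training set as an explicit input feature.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy "MBD system" (assumption): undamped mass-spring, x(t) = x0 * cos(w * t)
w = 2.0
def motion(x0, t):
    return x0 * np.cos(w * t)

rng = np.random.default_rng(1)

# Training set: time is an explicit input alongside the initial displacement
t = rng.uniform(0.0, 3.0, 2000)
x0 = rng.uniform(0.5, 1.5, 2000)
X = np.column_stack([t, x0])
y = motion(x0, t)

dnn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
dnn.fit(X, y)

# Estimate the motion at t = 1.0, x0 = 1.0 without integrating equations of motion
pred = dnn.predict([[1.0, 1.0]])[0]
```

Once trained, evaluating the meta-model at an arbitrary time is a single forward pass, so no time-stepping integrator is needed at query time.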
In this paper, we propose a new end-to-end methodology to optimize the energy performance as well as the comfort, air quality, and hygiene of large buildings. A metamodel based on a Transformer network is introduced and trained on a dataset sampled with a simulation program. A few physical parameters and the building management system settings of this metamodel are then calibrated using the CMA-ES optimization algorithm and real data obtained from sensors. Finally, the optimal settings that minimize the energy loads while maintaining a target thermal comfort and air quality are obtained through a multi-objective optimization procedure. Numerical experiments illustrate how this metamodel ensures a significant gain in energy efficiency while being computationally far more tractable than models requiring a huge number of physical parameters to be estimated.
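The calibration step, fitting a few metamodel parameters to sensor data with an evolution strategy, can be illustrated with a toy surrogate. The sketch below uses a simplified (mu, lambda) evolution strategy with a shrinking step size as a stand-in for CMA-ES (it omits covariance adaptation); the two-parameter "metamodel" and the synthetic sensor trace are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-parameter metamodel output over one day (assumption)
true_params = np.array([0.8, 1.5])
t = np.linspace(0.0, 1.0, 50)

def metamodel(p):
    return p[0] * np.exp(-p[1] * t)

sensor = metamodel(true_params)   # synthetic "real sensor data"

def loss(p):
    return np.mean((metamodel(p) - sensor) ** 2)

# Simplified (mu, lambda) evolution strategy as a stand-in for CMA-ES:
# sample a population, keep the elites, recenter, and shrink the step size
mean, sigma = np.array([0.5, 0.5]), 0.3
for gen in range(100):
    pop = mean + sigma * rng.normal(size=(20, 2))
    fitness = np.array([loss(p) for p in pop])
    elite = pop[np.argsort(fitness)[:5]]
    mean = elite.mean(axis=0)
    sigma *= 0.95
```

Derivative-free search of this kind only needs metamodel evaluations, which is why calibrating a fast surrogate against sensor data is practical even when the original building simulation is expensive.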
We present a novel technique for assessing the dynamics of multiphase fluid flow in oil reservoirs. We demonstrate an efficient workflow for handling 3D reservoir simulation data that is orders of magnitude faster than the conventional routine. The workflow (which we call the "Metamodel") is based on projecting the system dynamics into a latent variable space using a Variational Autoencoder, within which a Recurrent Neural Network predicts the dynamics. We show that, when trained on multiple results of conventional reservoir modelling, the Metamodel does not significantly compromise the accuracy of the reservoir dynamics reconstruction. It allows forecasting not only the flow rates from the wells, but also the dynamics of pressure and fluid saturations within the reservoir. These results open a new perspective on the optimization of oilfield development, since scenario screening can be substantially accelerated.
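The encode / predict-in-latent-space / decode pattern can be shown with much simpler components: below, PCA serves as a linear stand-in for the Variational Autoencoder and a one-step ridge regression stands in for the Recurrent Neural Network. The synthetic high-dimensional "reservoir state" trajectory is invented for the example.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)

# Synthetic "reservoir" trajectory: 100 time steps of a 50-dim state driven
# by two slow oscillating modes plus noise (assumption for the example)
t = np.arange(100)
modes = np.column_stack([np.sin(0.1 * t), np.cos(0.1 * t)])
basis = rng.normal(size=(2, 50))
states = modes @ basis + 0.01 * rng.normal(size=(100, 50))

# Encode to a latent space (PCA as a linear stand-in for the VAE encoder)
pca = PCA(n_components=2)
z = pca.fit_transform(states)

# One-step latent dynamics model (ridge regression as a stand-in for the RNN)
dyn = Ridge(alpha=1e-6).fit(z[:-1], z[1:])

# Roll the latent dynamics forward 10 steps, then decode back to full states
z_t = z[49]
for _ in range(10):
    z_t = dyn.predict(z_t[None])[0]
recon = pca.inverse_transform(z_t[None])[0]
err = np.linalg.norm(recon - states[59]) / np.linalg.norm(states[59])
```

Because the dynamics are propagated entirely in the low-dimensional latent space, each forecast step costs a small matrix multiply instead of a full 3D simulation step, which is where the orders-of-magnitude speedup comes from.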
Computer simulation has become the standard tool in many engineering fields for designing and optimizing systems, as well as for assessing their reliability. To cope with demanding analyses such as optimization and reliability assessment, surrogate models (a.k.a. meta-models) have been increasingly investigated in the last decade. Polynomial Chaos Expansions (PCE) and Kriging are two popular non-intrusive meta-modelling techniques. PCE surrogates the computational model with a series of orthonormal polynomials in the input variables, where the polynomials are chosen in accordance with the probability distributions of those input variables. Kriging, on the other hand, assumes that the computer model behaves as a realization of a Gaussian random process whose parameters are estimated from the available computer runs, i.e. input vectors and response values. These two techniques have so far been developed more or less in parallel, with little interaction between the researchers in the two fields. In this paper, PC-Kriging is derived as a new non-intrusive meta-modeling approach combining PCE and Kriging. A sparse set of orthonormal polynomials (PCE) approximates the global behavior of the computational model, whereas Kriging manages the local variability of the model output. An adaptive algorithm similar to least angle regression determines the optimal sparse set of polynomials. PC-Kriging is validated on various benchmark analytical functions that are easy to sample for reference results. From the numerical investigations it is concluded that PC-Kriging performs at least as well as, and often better than, the two distinct meta-modeling techniques. The gain in accuracy is larger when the experimental design has a limited size, which is an asset when dealing with demanding computational models.
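The division of labor described here, a polynomial trend for global behavior plus a Gaussian process for local variability, can be sketched with a simplified two-stage scheme: fit a low-order polynomial regression (standing in for the sparse PCE), then fit a Gaussian process (Kriging) to its residuals. Actual PC-Kriging embeds the PCE as the Kriging trend and estimates everything jointly, so this is only an approximation of the idea; the test function and sample sizes are invented.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy "computational model" on [-1, 1] (assumption for the example)
def model(x):
    return np.sin(3.0 * x) + 0.5 * x**2

rng = np.random.default_rng(4)
X = rng.uniform(-1.0, 1.0, size=(30, 1))   # small experimental design
y = model(X[:, 0])

# Global trend: low-order polynomial regression (stand-in for the sparse PCE)
trend = make_pipeline(PolynomialFeatures(4), LinearRegression()).fit(X, y)
resid = y - trend.predict(X)

# Local variability: Kriging (Gaussian process) fitted to the trend residuals
gp = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-6).fit(X, resid)

# Prediction = global polynomial trend + local GP correction
Xt = np.linspace(-1.0, 1.0, 50)[:, None]
pred = trend.predict(Xt) + gp.predict(Xt)
rmse = np.sqrt(np.mean((pred - model(Xt[:, 0])) ** 2))
```

The polynomial removes the large-scale shape first, so the GP only has to interpolate a small, smooth residual, which is exactly the situation where a limited experimental design still yields an accurate surrogate.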