Uncertainty Quantification for Machine Learning-Based Prediction: A Polynomial Chaos Expansion Approach for Joint Model and Input Uncertainty Propagation

Du, Xiaoping

arXiv.org Machine Learning 

Machine learning (ML) surrogate models are increasingly used in engineering analysis and design to replace computationally expensive simulation models, significantly reducing computational cost and accelerating decision-making. However, ML predictions carry inherent errors, commonly quantified as model uncertainty, which is coupled with variability in the model inputs. Accurately quantifying and propagating these combined uncertainties is essential for generating reliable engineering predictions. This paper presents a robust framework based on Polynomial Chaos Expansion (PCE) for joint propagation of input and model uncertainty. While the approach applies broadly to general ML surrogates, we focus on Gaussian Process regression models, which provide explicit predictive distributions for model uncertainty. By transforming all random inputs into a unified standard space, a PCE surrogate model is constructed, allowing efficient and accurate calculation of the mean and standard deviation of the output. The proposed methodology also supports global sensitivity analysis, enabling accurate quantification of the individual contributions of the input variables and the ML model uncertainty to the overall output variability. This approach provides a computationally efficient and interpretable framework for comprehensive uncertainty quantification, supporting trustworthy ML predictions in downstream engineering applications.
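The core idea described in the abstract can be sketched in a few lines. The sketch below is an illustrative toy, not the paper's implementation: a Gaussian Process surrogate's predictive distribution is represented as y(x) = mu(x) + sigma(x) z, with z a standard normal "model uncertainty" variable; the analytic forms of mu and sigma here are assumed stand-ins for an actual trained GP. Both the random input x and z are mapped to a unified standard normal space, a total-degree Hermite PCE is fit by least squares, and the output mean, standard deviation, and first-order Sobol indices (separating input vs. model-uncertainty contributions) are read directly from the PCE coefficients.

```python
import numpy as np
from math import factorial
from itertools import product

# Hypothetical stand-ins for a trained GP's predictive mean and std (assumed)
mu = lambda x: np.sin(x) + 0.5 * x
sigma = lambda x: 0.05 * (1.0 + x**2)

# Input uncertainty: x ~ N(1.0, 0.3^2); model uncertainty: z ~ N(0, 1)
mx, sx = 1.0, 0.3

def hermite(n, u):
    """Normalized probabilists' Hermite polynomial He_n(u) / sqrt(n!)."""
    h0, h1 = np.ones_like(u), u
    for k in range(1, n):
        h0, h1 = h1, u * h1 - k * h0          # He_{k+1} = u He_k - k He_{k-1}
    h = h0 if n == 0 else h1
    return h / np.sqrt(factorial(n))

rng = np.random.default_rng(0)
N, p = 20000, 4
U = rng.standard_normal((N, 2))               # unified standard space: u1 -> x, u2 -> z
x = mx + sx * U[:, 0]
y = mu(x) + sigma(x) * U[:, 1]                # samples of the GP prediction

# Tensor Hermite basis up to total degree p, coefficients by least squares
alphas = [a for a in product(range(p + 1), repeat=2) if sum(a) <= p]
Psi = np.column_stack([hermite(a1, U[:, 0]) * hermite(a2, U[:, 1])
                       for a1, a2 in alphas])
c, *_ = np.linalg.lstsq(Psi, y, rcond=None)

mean = c[0]                                   # coefficient of the constant term
var = np.sum(c[1:] ** 2)                      # orthonormal basis: variance = sum of c_i^2
# First-order Sobol indices: pure-input terms vs. pure model-uncertainty terms
S_x = sum(ci**2 for ci, a in zip(c, alphas) if a[0] > 0 and a[1] == 0) / var
S_z = sum(ci**2 for ci, a in zip(c, alphas) if a[1] > 0 and a[0] == 0) / var
print(f"mean={mean:.4f}, std={np.sqrt(var):.4f}, S_x={S_x:.3f}, S_z={S_z:.3f}")
```

Because the basis is orthonormal, the mean and variance fall out of the coefficients with no further sampling, and the variance decomposition by multi-index gives the sensitivity split between input variability and ML model uncertainty that the abstract describes; the residual share (1 - S_x - S_z) is the interaction contribution.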