High-dimensional reliability-based design optimization using stochastic emulators

Moustapha, M., Sudret, B.

arXiv.org Machine Learning

Reliability-based design optimization (RBDO) is traditionally formulated as a nested optimization and reliability problem. Although surrogate models are generally employed to improve efficiency, the approach remains computationally prohibitive in high-dimensional settings. This paper proposes a novel RBDO framework based on a stochastic simulator viewpoint, in which the deterministic limit-state function and the uncertainty in the model inputs are combined into a unified stochastic representation. Under this formulation, the system response conditioned on a given design is modeled directly through its output distribution, rather than through an explicit limit-state function. Stochastic emulators are constructed in the design space to approximate the conditional response distribution, enabling the semi-analytical evaluation of failure probabilities or associated quantiles without resorting to Monte Carlo simulation. Two classes of stochastic emulators are investigated, namely generalized lambda models and stochastic polynomial chaos expansions. Both approaches provide a deterministic mapping between design variables and reliability constraints, which breaks the classical double-loop structure of RBDO and allows the use of standard deterministic optimization algorithms. The performance of the proposed approach is evaluated on a set of benchmark problems with dimensionality ranging from low to very high, including a case with stochastic excitation. The results are compared against a Kriging-based approach formulated in the full input space. The proposed method yields substantial computational gains, particularly in high-dimensional settings. While its efficiency is comparable to Kriging for low-dimensional problems, it significantly outperforms Kriging as the dimensionality increases.
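
As a concrete illustration of how a stochastic emulator turns the nested RBDO loop into a single deterministic optimization, the sketch below fits a cheap emulator of the conditional response over the design space and evaluates the failure-probability constraint in closed form. This is a minimal toy, not the paper's implementation: a Gaussian conditional model stands in for the generalized lambda and stochastic PCE emulators, and the names (limit_state, pf, the toy cost) are illustrative.

import numpy as np
from scipy.optimize import minimize, NonlinearConstraint
from scipy.stats import norm

rng = np.random.default_rng(0)

def limit_state(d, x):
    # Toy limit state: failure when g(d, X) < 0; the margin shrinks as d grows.
    return 4.0 - d + x

# Offline stage: replicated simulations at a handful of training designs.
d_train = np.linspace(0.5, 3.5, 8)
G = np.array([limit_state(d, rng.normal(0.0, 0.5, 200)) for d in d_train])

# Emulate the conditional response distribution over the design space
# (here: Gaussian with polynomial mean/std, in lieu of GLaM / stochastic PCE).
mu_coef = np.polyfit(d_train, G.mean(axis=1), deg=2)
sd_coef = np.polyfit(d_train, G.std(axis=1), deg=2)

def pf(d):
    # Semi-analytical failure probability under the emulator:
    # no inner Monte Carlo loop is needed at optimization time.
    mu, sd = np.polyval(mu_coef, d), max(np.polyval(sd_coef, d), 1e-9)
    return norm.cdf(-mu / sd)

# Online stage: one standard deterministic optimization replaces the
# classical double loop. The toy cost is infeasible at its unconstrained
# optimum, so the reliability constraint Pf(d) <= 1e-3 is active.
cost = lambda d: (d[0] - 5.0) ** 2
con = NonlinearConstraint(lambda d: pf(d[0]), 0.0, 1e-3)
res = minimize(cost, x0=[2.0], bounds=[(0.5, 3.5)], constraints=[con])
print(f"d* = {res.x[0]:.3f}, Pf(d*) = {pf(res.x[0]):.2e}")

Because pf(d) is a smooth deterministic function of the design, any off-the-shelf constrained optimizer can be used; this is the sense in which the emulator breaks the double-loop structure.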


The Geometry of Efficient Nonconvex Sampling

Vempala, Santosh S., Wibisono, Andre

arXiv.org Machine Learning

We present an efficient algorithm for uniformly sampling from an arbitrary compact body $\mathcal{X} \subset \mathbb{R}^n$ from a warm start under isoperimetry and a natural volume growth condition. Our result provides a substantial common generalization of known results for convex bodies and star-shaped bodies. The complexity of the algorithm is polynomial in the dimension, the Poincaré constant of the uniform distribution on $\mathcal{X}$, and the volume growth constant of the set $\mathcal{X}$.
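
For intuition on what sampling from a nonconvex body under isoperimetry involves, here is a classical Metropolized ball walk driven only by a membership oracle, run from a warm start. This is a baseline sketch, not the paper's algorithm; the toy body, the step size delta, and the oracle name in_body are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(1)

def in_body(x):
    # Membership oracle for a toy nonconvex (star-shaped) body in R^3:
    # the unit ball restricted to a plus-shaped cross-section.
    return x @ x <= 1.0 and (abs(x[0]) <= 0.3 or abs(x[1]) <= 0.3)

def ball_walk(x0, delta=0.2, steps=2000):
    # From a warm start x0, propose a uniform point in the delta-ball
    # around x and accept iff it stays inside the body; this rejection
    # filter makes the uniform distribution stationary. Mixing is
    # governed by the body's isoperimetry (its Poincaré constant).
    x = np.asarray(x0, dtype=float)
    assert in_body(x), "warm start must lie inside the body"
    n = x.size
    for _ in range(steps):
        u = rng.normal(size=n)
        # Uniform point in the delta-ball: uniform direction, radius delta*U^(1/n).
        u *= delta * rng.uniform() ** (1.0 / n) / np.linalg.norm(u)
        if in_body(x + u):
            x = x + u
    return x

# Crude sanity check: by symmetry, roughly half the samples have x[0] > 0.
samples = np.array([ball_walk([0.0, 0.0, 0.0]) for _ in range(100)])
print("fraction with x[0] > 0:", np.mean(samples[:, 0] > 0.0))

The per-step cost is one oracle call; what the paper quantifies is how many such steps suffice, with the dependence on the Poincaré and volume growth constants made explicit.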


A Proofs from Section 2

Neural Information Processing Systems

[Appendix excerpt: proofs of a generalization of Proposition 2.1 and of the sample-complexity bounds for Algorithm 4, using a concentration bound (Theorem 4.5 in [MU17]) together with equation (A.2) and Bayes' rule; the displayed equations are omitted.]