An adaptive sampling algorithm for data-generation to build a data-manifold for physical problem surrogate modeling

Mang, Chetra, TahmasebiMoradi, Axel, Danan, David, Yagoubi, Mouadh

arXiv.org Machine Learning

Physical models classically involve partial differential equations (PDEs) and, depending on their underlying complexity and the required level of accuracy, are known to be computationally expensive to solve numerically. Thus, an idea is to create a surrogate model relying on data generated by such a solver. However, training such a model on imbalanced data has been shown to be a very difficult task. Indeed, if the distribution of inputs leads to a poor representation of the response manifold, the model may not learn well and, consequently, may not predict the outcome with acceptable accuracy. In this work, we present an Adaptive Sampling Algorithm for Data Generation (ASADG) involving a physical model. As the initial input data may not accurately represent the response manifold in higher dimensions, the algorithm iteratively adds input data to it. At each step, the barycenter of each simplex of the complex into which the manifold is discretized is added as new input data, if a certain threshold is satisfied. We demonstrate the efficiency of the data sampling algorithm in comparison with the LHS method for generating more representative input data. To do so, we focus on the construction of a metamodel for a harmonic transport problem by generating data through a classical solver. Using this algorithm, it is possible to generate the same number of input data points as LHS while providing a better representation of the response manifold.
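One refinement step of this kind can be sketched as follows, assuming a Delaunay triangulation of the current input points and a simple response-variation threshold (the function name `adaptive_barycenter_step` and the threshold criterion are illustrative, not the authors' exact formulation):

```python
import numpy as np
from scipy.spatial import Delaunay

def adaptive_barycenter_step(points, values, threshold):
    """One adaptive-sampling step in the spirit of ASADG (a sketch):
    triangulate the current inputs and add the barycenter of every
    simplex whose response variation exceeds a threshold."""
    tri = Delaunay(points)
    new_points = []
    for simplex in tri.simplices:
        v = values[simplex]
        # Variation of the response across the simplex vertices.
        if v.max() - v.min() > threshold:
            # Barycenter of the simplex becomes a new input sample.
            new_points.append(points[simplex].mean(axis=0))
    if not new_points:
        return np.empty((0, points.shape[1]))
    return np.array(new_points)
```

New points would then be fed to the solver, and the step repeated until no simplex exceeds the threshold.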


Introducing a microstructure-embedded autoencoder approach for reconstructing high-resolution solution field data from a reduced parametric space

Koopas, Rasoul Najafi, Rezaei, Shahed, Rauter, Natalie, Ostwald, Richard, Lammering, Rolf

arXiv.org Artificial Intelligence

In this study, we develop a novel multi-fidelity deep learning approach that transforms low-fidelity solution maps into high-fidelity ones by incorporating parametric space information into a standard autoencoder architecture. Integrating parametric space information significantly reduces the amount of training data needed to effectively predict high-fidelity solutions from low-fidelity ones. We examine a two-dimensional steady-state heat transfer analysis within a highly heterogeneous material microstructure. The heat conductivity coefficients for two different materials are condensed from a 101 x 101 grid to smaller grids. We then solve the boundary value problem on the coarsest grid using a pre-trained physics-informed neural operator network known as Finite Operator Learning (FOL). The resulting low-fidelity solution is subsequently upscaled back to a 101 x 101 grid using a newly designed enhanced autoencoder. The novelty of this autoencoder lies in the concatenation of heat conductivity maps of different resolutions to the decoder segment in distinct steps; hence, the developed algorithm is named the microstructure-embedded autoencoder (MEA). We compare the MEA outcomes with those from finite element methods, the standard U-Net, and various other upscaling techniques, including interpolation functions and feedforward neural networks (FFNNs). Our analysis shows that MEA outperforms these methods on the test cases in terms of both computational efficiency and error. As a result, the MEA serves as a potential supplement to neural operator networks, effectively upscaling low-fidelity solutions to high fidelity while preserving critical details often lost in traditional upscaling methods, particularly at sharp interfaces like those seen with interpolation.
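The decoder-side concatenation can be sketched with plain arrays, using nearest-neighbour upsampling as a stand-in for the learned layers (hypothetical shapes; the real MEA uses trained convolutions):

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour upsampling of a (C, H, W) feature map,
    # standing in for a learned upsampling stage of the decoder.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def mea_decoder_sketch(latent, conductivity_maps):
    """Illustrative data flow of a microstructure-embedded decoder:
    at each upsampling step, the conductivity map matching the
    current resolution is concatenated to the feature channels."""
    x = latent
    for k_map in conductivity_maps:  # ordered coarse -> fine
        x = upsample2x(x)
        assert k_map.shape == x.shape[1:], "map must match current resolution"
        # Channel-wise concatenation of the microstructure information.
        x = np.concatenate([x, k_map[None]], axis=0)
    return x
```

Each concatenation re-injects the microstructure at the resolution the decoder has just reached, which is what preserves sharp material interfaces.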


Meta Reinforcement Learning with Finite Training Tasks -- a Density Estimation Approach

Rimon, Zohar, Tamar, Aviv, Adler, Gilad

arXiv.org Artificial Intelligence

In meta reinforcement learning (meta RL), an agent learns from a set of training tasks how to quickly solve a new task drawn from the same task distribution. The optimal meta RL policy, a.k.a. the Bayes-optimal behavior, is well defined and guarantees optimal reward in expectation, taken with respect to the task distribution. The question we explore in this work is how many training tasks are required to guarantee approximately optimal behavior with high probability. Recent work provided the first such PAC analysis for a model-free setting, where a history-dependent policy was learned from the training tasks. In this work, we propose a different approach: directly learn the task distribution, using density estimation techniques, and then train a policy on the learned task distribution. We show that our approach leads to bounds that depend on the dimension of the task distribution. In particular, in settings where the task distribution lies on a low-dimensional manifold, we extend our analysis to use dimensionality reduction techniques and account for such structure, obtaining significantly better bounds than previous work, which strictly depend on the number of states and actions. The key to our approach is the regularization implied by the kernel density estimation method. We further demonstrate that this regularization is useful in practice, when plugged into the state-of-the-art VariBAD meta RL algorithm.
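The density-estimation step can be sketched as follows, assuming 2-D task parameters and a Gaussian KDE (the data here is illustrative; the paper's analysis covers kernel density estimation in general):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Hypothetical 2-D task parameters drawn from an unknown task distribution.
train_tasks = rng.normal(loc=[0.0, 1.0], scale=0.3, size=(50, 2))

# Fit a kernel density estimate of the task distribution; the KDE
# bandwidth provides the regularization argued to be key.
kde = gaussian_kde(train_tasks.T)

# Train-time: sample tasks from the *learned* distribution rather
# than cycling only through the finite training set.
sampled_tasks = kde.resample(10, seed=1).T
```

The policy is then trained on tasks drawn from `kde` instead of the raw training set, smoothing over the finite sample.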


Feature space exploration as an alternative for design space exploration beyond the parametric space

Pedroso, Tomas Cabezon, Rhee, Jinmo, Byrne, Daragh

arXiv.org Artificial Intelligence

This paper compares the parametric design space with a feature space generated by the extraction of design features using deep learning (DL) as an alternative way for design space exploration. In this comparison, the parametric design space is constructed by creating a synthetic dataset of 15,000 elements using a parametric algorithm and reducing its dimensions for visualization. The feature space -- a reduced-dimensionality vector space of embedded data features -- is constructed by training a DL model on the same dataset. We analyze and compare the extracted design features by reducing their dimension and visualizing the results. We demonstrate that the parametric design space is narrow in how it describes design solutions because it is based on the combination of individual parameters. In comparison, we observe that the feature space can intuitively represent design solutions according to complex parameter relationships. Based on our results, we discuss the potential of translating the features learned by DL models into a mechanism for intuitive design space exploration and visualization of possible design solutions.
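The comparison can be sketched as follows, with a PCA projection standing in for the paper's dimensionality reduction and synthetic nonlinear combinations standing in for the DL embeddings (all names and functional forms here are illustrative, not the paper's pipeline):

```python
import numpy as np

def pca_2d(X):
    # Project data to 2-D via PCA for visualization
    # (a stand-in for whichever reduction the study uses).
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

rng = np.random.default_rng(0)
# Hypothetical parametric design space: 3 independent shape parameters.
params = rng.uniform(0.0, 1.0, size=(500, 3))
# Hypothetical feature space: nonlinear functions of parameter
# *combinations*, standing in for DL-extracted embeddings.
features = np.column_stack([
    params[:, 0] * params[:, 1],
    np.sin(np.pi * params[:, 2]),
    params.sum(axis=1) ** 2,
])

param_view = pca_2d(params)      # visualization of the parametric space
feature_view = pca_2d(features)  # visualization of the feature space
```

Plotting the two 2-D views side by side is the kind of comparison the paper performs: the parametric view organizes designs by individual parameters, the feature view by their interactions.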


An iterative multi-fidelity approach for model order reduction of multi-dimensional input parametric PDE systems

Chetry, Manisha, Borzacchiello, Domenico, Lestandi, Lucas, Da Silva, Luisa Rocha

arXiv.org Artificial Intelligence

We propose a parametric sampling strategy for the reduction of large-scale PDE systems with multidimensional input parametric spaces by leveraging models of different fidelity. The methodology allows a user to adaptively sample points ad hoc from a discrete training set with no prior requirement of error estimators. This is achieved by exploiting low-fidelity models throughout the parametric space to sample points using an efficient sampling strategy; at the sampled parametric points, high-fidelity models are evaluated to recover the reduced basis functions. The low-fidelity models are then adapted with the reduced order models (ROMs) built by projection onto the subspace spanned by the recovered basis functions. The process continues until the low-fidelity model can represent the high-fidelity model adequately for all parameters in the parametric space. Since the proposed methodology leverages low-fidelity models to assimilate the solution database, it significantly reduces the computational cost of the offline stage. The highlight of this article is the construction of the initial low-fidelity model and a sampling strategy based on the discrete empirical interpolation method (DEIM). We test this approach on a 2D steady-state heat conduction problem with two input parameters and make a qualitative comparison with the classical greedy reduced basis method (RBM); we further test it on a 9-dimensional parametric non-coercive elliptic problem and analyze the computational performance under different tunings of the greedy point selection.
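The standard greedy DEIM index selection underlying such a sampling strategy can be sketched as follows (the authors' variant may differ in its details):

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM index selection from a basis matrix U (n x m):
    at each step, pick the row where the interpolation residual of
    the next basis vector is largest in magnitude."""
    n, m = U.shape
    # First index: largest entry of the first basis vector.
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # Interpolate the j-th basis vector at the chosen indices ...
        c = np.linalg.solve(U[np.ix_(idx, np.arange(j))], U[idx, j])
        # ... and pick the location of the largest residual.
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return idx
```

The selected indices are the parametric points at which the high-fidelity model is queried next.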


New Methods for Detecting Concentric Objects With High Accuracy

Al-Sharadqah, Ali A., Rull, Lorenzo

arXiv.org Machine Learning

Fitting concentric geometric objects to digitized data is an important problem in many areas such as iris detection, autonomous navigation, and industrial robotics operations. There are two common approaches to fitting geometric shapes to data: the geometric (iterative) approach and the algebraic (non-iterative) approach. The geometric approach is a nonlinear iterative method that minimizes the sum of the squared Euclidean distances of the observed points to the ellipses; it is regarded as the most accurate method, but it needs a good initial guess to improve the convergence rate. The algebraic approach is based on minimizing the algebraic distances with some constraints imposed on the parametric space. Each algebraic method depends on the imposed constraint and can be solved with the aid of the generalized eigenvalue problem. Only a few methods in the literature have been developed to solve the problem of concentric ellipses. Here we study the statistical properties of existing methods by first establishing a general mathematical and statistical framework for this problem. Using rigorous perturbation analysis, we derive the variance and bias of each method under the small-sigma model. We also develop new estimators, which can be used as reliable initial guesses for other iterative methods. Then we compare the performance of each method according to its theoretical accuracy. Not only do the methods described here outperform other existing non-iterative methods, they are also quite robust against large noise. These methods and their practical performance are assessed by a series of numerical experiments on both synthetic and real data.
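The flavor of the algebraic (non-iterative) approach can be illustrated on the simplest case of a single circle, the classical Kasa fit, which reduces the algebraic-distance minimization to linear least squares (a toy instance, not the concentric-ellipse estimators developed in the paper):

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Algebraic (non-iterative) circle fit (Kasa method): minimize
    sum((x^2 + y^2 + a*x + b*y + c)^2) over (a, b, c), a linear
    least-squares problem, then recover center and radius."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, r
```

Such a closed-form estimate is exactly the kind of cheap, reliable initial guess the iterative geometric methods need.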