Goto

Collaborating Authors

 Bouklas, Nikolaos


Condensed Stein Variational Gradient Descent for Uncertainty Quantification of Neural Networks

arXiv.org Machine Learning

In the context of uncertainty quantification (UQ), the curse of dimensionality, whereby quantification efficiency degrades drastically with parameter dimension, is particularly extreme for highly parameterized models such as neural networks (NNs). Fortunately, in many cases, these models are overparameterized in the sense that the number of parameters can be reduced with negligible effects on accuracy and sometimes improvements in generalization [1]. Furthermore, NNs often have parameterizations with fungible parameters, such that permutations of the values and connections lead to equivalent output responses. This suggests that methods which simultaneously sparsify a model and characterize its uncertainty, while handling and taking advantage of the symmetries inherent in the model, are potentially advantageous approaches. Although Markov chain Monte Carlo (MCMC) methods [2] have been the reference standard for generating samples for UQ, they can be temperamental and do not scale well to high-dimensional models. More recently, there has been widespread use of variational inference (VI) methods, which cast the parameter posterior sampling problem as an optimization of a surrogate posterior guided by a suitable objective, such as the Kullback-Leibler (KL) divergence between the predictive posterior and the true posterior induced by the data. In particular, there is now a family of model ensemble methods based on Stein's identity [3], such as Stein variational gradient descent (SVGD) [4], projected SVGD [5], and Stein variational Newton's method [6]. These methods have advantages over MCMC by virtue of propagating in parallel a coordinated ensemble of particles that represents the empirical posterior.
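As a concrete illustration, the basic SVGD update is only a few lines. The sketch below is a minimal one-dimensional version with an RBF kernel of fixed bandwidth and a standard-normal toy target; these are illustrative assumptions, not the configurations used in the paper.

```python
import numpy as np

def svgd_step(x, grad_logp, h=1.0, eps=0.1):
    """One SVGD update for 1D particles with the RBF kernel k(a, b) = exp(-(a-b)^2 / h)."""
    n = len(x)
    d = x[:, None] - x[None, :]
    K = np.exp(-d**2 / h)
    drive = K @ grad_logp(x)                   # kernel-smoothed log-density gradient
    repulse = (2.0 / h) * (d * K).sum(axis=1)  # kernel gradient: keeps particles apart
    return x + eps * (drive + repulse) / n

# toy target: standard normal, so grad log p(x) = -x
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=0.5, size=50)    # badly initialized ensemble
for _ in range(500):
    x = svgd_step(x, lambda z: -z)
# the coordinated ensemble drifts toward the target and spreads to cover it
```

The repulsive term is what distinguishes SVGD from simply running parallel gradient ascent on the log posterior: without it, all particles would collapse onto the same mode.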


A review on data-driven constitutive laws for solids

arXiv.org Artificial Intelligence

This review article highlights state-of-the-art data-driven techniques to discover, encode, surrogate, or emulate constitutive laws that describe the path-independent and path-dependent response of solids. Our objective is to provide an organized taxonomy for a large spectrum of methodologies developed in the past decades and to discuss the benefits and drawbacks of the various techniques for interpreting and forecasting mechanical behavior across different scales. Distinguishing between machine-learning-based and model-free methods, we further categorize approaches based on their interpretability and on their learning process/type of required data, while discussing the key problems of generalization and trustworthiness. We attempt to provide a road map for how generalization and trustworthiness can be reconciled in a data-availability-aware context. We also touch upon relevant aspects such as data sampling techniques, design of experiments, verification, and validation.


Extreme sparsification of physics-augmented neural networks for interpretable model discovery in mechanics

arXiv.org Artificial Intelligence

Data-driven constitutive modeling with neural networks has received increased interest in recent years due to its ability to easily incorporate physical and mechanistic constraints and to overcome the challenging and time-consuming task of formulating phenomenological constitutive laws that can accurately capture the observed material response. However, even though neural network-based constitutive laws have been shown to generalize proficiently, the generated representations are not easily interpretable due to their high number of trainable parameters. Sparse regression approaches exist that allow interpretable expressions to be obtained, but the user is tasked with creating a library of model forms, which by construction limits expressiveness to the functional forms provided in the library. In this work, we propose to train regularized physics-augmented neural network-based constitutive models utilizing a smoothed version of $L^{0}$-regularization. This aims to maintain the trustworthiness inherited from the physical constraints while also enabling an interpretability that has not previously been possible for machine learning-based constitutive models in which model forms are discovered rather than assumed a priori. During the training process, the network simultaneously fits the training data and penalizes the number of active parameters, while also ensuring constitutive constraints such as thermodynamic consistency. We show that the method can reliably obtain interpretable and trustworthy constitutive models for compressible and incompressible hyperelasticity, yield functions, and hardening models for elastoplasticity, for both synthetic and experimental data.
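As a toy illustration of the idea (not the paper's actual training setup), a differentiable surrogate for the $L^0$ penalty replaces the hard 0/1 count with a smooth bump, so gradient descent can simultaneously fit the data and deactivate parameters. The Gaussian-shaped surrogate, the linear toy regression, and all hyperparameters below are assumptions made for the sketch.

```python
import numpy as np

def smoothed_l0(w, beta=0.5):
    """Smooth surrogate for the L0 count: ~1 for |w| >> beta, ~0 as w -> 0."""
    return np.sum(1.0 - np.exp(-(w / beta) ** 2))

def smoothed_l0_grad(w, beta=0.5):
    """Gradient of the surrogate: shrinks small weights, leaves large ones alone."""
    return (2.0 * w / beta**2) * np.exp(-(w / beta) ** 2)

# toy regression: the response depends on feature 0 only
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0]
w = rng.normal(scale=0.1, size=5)
lam, lr = 0.05, 0.01
for _ in range(2000):
    grad = X.T @ (X @ w - y) / len(y) + lam * smoothed_l0_grad(w)
    w -= lr * grad
# w[0] stays active near 3; the four spurious weights are pushed toward zero
```

The key property mirrored here is that the penalty gradient vanishes for large weights, so active parameters are counted but not biased, unlike an $L^1$ or $L^2$ penalty.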


Stress representations for tensor basis neural networks: alternative formulations to Finger-Rivlin-Ericksen

arXiv.org Artificial Intelligence

Data-driven constitutive modeling frameworks based on neural networks and classical representation theorems have recently gained considerable attention due to their ability to easily incorporate constitutive constraints and their excellent generalization performance. In these models, the stress prediction follows from a linear combination of invariant-dependent coefficient functions and known tensor basis generators. However, thus far the formulations have been limited to stress representations based on the classical Rivlin and Ericksen form, while the performance of alternative representations has yet to be investigated. In this work, we survey a variety of tensor basis neural network models for modeling hyperelastic materials in a finite deformation context, including a number of previously unexplored formulations that use invariants and generators theoretically equivalent to the Finger-Rivlin-Ericksen form. Furthermore, we compare potential-based and coefficient-based approaches, as well as different calibration techniques. Nine variants are tested against both noisy and noiseless datasets for three different materials. Theoretical and practical insights into the performance of each formulation are given.
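The coefficient-times-generator structure can be sketched directly. Below, the neo-Hookean-style coefficient functions are a hand-picked stand-in for a trained coefficient network, and the whole example is an illustrative assumption rather than one of the nine variants studied in the paper.

```python
import numpy as np

def tbnn_stress(B, coeffs):
    """Cauchy stress as coefficient functions times tensor basis generators.

    Finger-Rivlin-Ericksen form: sigma = c0*I + c1*B + c2*B@B, with the
    coefficients depending on the invariants of the left Cauchy-Green tensor B.
    """
    I1 = np.trace(B)
    I2 = 0.5 * (I1**2 - np.trace(B @ B))
    I3 = np.linalg.det(B)
    c = coeffs(I1, I2, I3)            # stand-in for a trained coefficient network
    basis = (np.eye(3), B, B @ B)
    return sum(ck * Gk for ck, Gk in zip(c, basis))

# hand-picked neo-Hookean-like coefficients (shear modulus mu = 1)
coeffs = lambda I1, I2, I3: (-1.0 / np.sqrt(I3), 1.0 / np.sqrt(I3), 0.0)

F = np.diag([1.2, 1.0, 1.0 / 1.2])    # isochoric stretch, det(F) = 1
sigma = tbnn_stress(F @ F.T, coeffs)  # symmetric, and zero in the reference state
```

Because every generator is a symmetric tensor, symmetry of the predicted stress is built in; a potential-based variant would instead differentiate a scalar energy with respect to the deformation.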


Modular machine learning-based elastoplasticity: generalization in the context of limited data

arXiv.org Artificial Intelligence

The development of accurate constitutive models for materials that undergo path-dependent processes continues to be a complex challenge in computational solid mechanics. Challenges arise both in considering the appropriate model assumptions and from the viewpoint of data availability, verification, and validation. Recently, data-driven modeling approaches have been proposed that aim to establish stress-evolution laws that avoid user-chosen functional forms by relying on machine learning representations and algorithms. However, these approaches not only require a significant amount of data but also need data that probes the full stress space with a variety of complex loading paths. Furthermore, they rarely enforce all necessary thermodynamic principles as hard constraints. Hence, they are in particular not suitable for low-data or limited-data regimes, where the former arises from the cost of obtaining the data and the latter from the experimental limitations of obtaining labeled data, which is commonly the case in engineering applications. In this work, we discuss a hybrid framework that can work with a variable amount of data by relying on the modularity of the elastoplasticity formulation, where each component of the model can be chosen to be either a classical phenomenological or a data-driven model depending on the amount of available information and the complexity of the response. The method is tested on synthetic uniaxial data coming from simulations as well as cyclic experimental data for structural materials. The discovered material models are found to not only interpolate well but also allow for accurate extrapolation in a thermodynamically consistent manner far outside the domain of the training data. Training aspects and details of the implementation of these models into finite element simulations are discussed and analyzed.
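The modularity can be made concrete with a minimal 1D return-mapping sketch in which the hardening term is the interchangeable piece. The linear hardening law and all material parameters below are illustrative assumptions; in a hybrid framework the term `sig_y + H * alpha` (and its tangent) could be replaced by a data-driven hardening module with the same interface.

```python
import numpy as np

def return_map(strain_path, E=200.0, sig_y=0.25, H=10.0):
    """1D elastoplastic return mapping with linear isotropic hardening.

    The hardening term sig_y + H * alpha is the modular component: a
    phenomenological law here, but it could be swapped for a trained
    regressor (together with its tangent) with the same signature.
    """
    eps_p, alpha = 0.0, 0.0
    stresses = []
    for eps in strain_path:
        sig = E * (eps - eps_p)               # elastic trial stress
        f = abs(sig) - (sig_y + H * alpha)    # yield function
        if f > 0.0:                           # plastic step: project back
            dgamma = f / (E + H)              # consistency condition
            eps_p += dgamma * np.sign(sig)
            alpha += dgamma
            sig = E * (eps - eps_p)
        stresses.append(sig)
    return np.array(stresses)

sig = return_map(np.linspace(0.0, 0.01, 50))  # monotonic tension
```

After yield, the stress follows the reduced elastoplastic slope E*H/(E+H) instead of the elastic slope E, so the final stress is well below the purely elastic prediction.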


Reduced order modeling for flow and transport problems with Barlow Twins self-supervised learning

arXiv.org Artificial Intelligence

We propose a unified data-driven reduced order model (ROM) that bridges the performance gap between linear and nonlinear manifold approaches. Deep learning ROM (DL-ROM) using deep-convolutional autoencoders (DC-AE) has been shown to capture nonlinear solution manifolds but fails to perform adequately when linear subspace approaches such as proper orthogonal decomposition (POD) would be optimal. Moreover, most DL-ROM models rely on convolutional layers, which may limit their application to structured meshes. The proposed framework in this study relies on the combination of an autoencoder (AE) and Barlow Twins (BT) self-supervised learning, where BT maximizes the information content of the latent-space embedding through a joint-embedding architecture. Through a series of benchmark problems of natural convection in porous media, BT-AE performs better than the previous DL-ROM framework by providing results comparable to POD-based approaches for problems where the solution lies within a linear subspace, as well as to DL-ROM autoencoder-based techniques where the solution lies on a nonlinear manifold; consequently, it bridges the gap between linear and nonlinear reduced manifolds. We illustrate that a proficient construction of the latent space is key to achieving these results, enabling us to map these latent spaces using regression models. The proposed framework achieves a relative error of 2% on average and 12% in the worst-case scenario (i.e., when the training data are small but the parameter space is large).
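The Barlow Twins objective itself is compact. The sketch below, with assumed normalization details and random data standing in for the two embedding views, shows the mechanism; it is not the paper's ROM pipeline.

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Barlow Twins objective on two batches of embeddings.

    The batch-normalized cross-correlation matrix is driven toward the
    identity: diagonal -> 1 (the two views agree), off-diagonal -> 0
    (embedding dimensions carry non-redundant information).
    """
    n = z1.shape[0]
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = z1.T @ z2 / n
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)
    off_diag = np.sum(c**2) - np.sum(np.diag(c) ** 2)
    return on_diag + lam * off_diag

rng = np.random.default_rng(2)
z = rng.normal(size=(128, 8))
loss_same = barlow_twins_loss(z, z)                          # aligned views: small loss
loss_diff = barlow_twins_loss(z, rng.normal(size=(128, 8)))  # unrelated views: large loss
```

The off-diagonal (redundancy reduction) term is what encourages an information-rich latent space that downstream regression models can map effectively.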


Local approximate Gaussian process regression for data-driven constitutive laws: Development and comparison with neural networks

arXiv.org Artificial Intelligence

Hierarchical computational methods for multiscale mechanics, such as the FE$^2$ and FE-FFT methods, are generally accompanied by high computational costs. Data-driven approaches are able to speed the process up significantly by making it possible to incorporate the effective micromechanical response in macroscale simulations without performing additional computations at each Gauss point explicitly. Traditionally, artificial neural networks (ANNs) have been the surrogate modeling technique of choice in the solid mechanics community. However, they suffer from severe drawbacks due to their parametric nature and suboptimal training and inference properties for the investigated datasets in a three-dimensional setting. These problems can be avoided using local approximate Gaussian process regression (laGPR). This method allows the prediction of stress outputs at particular strain-space locations by training local regression models based on Gaussian processes, using only a subset of the data for each local model, and offers better and more reliable accuracy than ANNs. A modified Newton-Raphson approach is proposed to accommodate the local nature of the laGPR approximation when solving the global structural problem in an FE setting. Hence, the presented work offers a complete and general framework enabling multiscale calculations that combine data-driven constitutive prediction using laGPR with macroscopic calculations using an FE scheme, which we test for finite-strain three-dimensional hyperelastic problems.
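A minimal nearest-neighbor simplification of the local-GP idea is sketched below: rather than conditioning one GP on the full dataset, a small GP is fit on the neighbors of each query point. The RBF kernel, lengthscale, neighborhood size, and 2D toy response are assumptions for the sketch; the paper's laGPR implementation selects its local subdesigns differently.

```python
import numpy as np

def gp_predict(Xtr, ytr, xq, ell=0.3, jitter=1e-5):
    """Exact GP regression mean with an RBF kernel on a small training set."""
    def k(A, B):
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        return np.exp(-0.5 * d2 / ell**2)
    K = k(Xtr, Xtr) + jitter * np.eye(len(Xtr))
    return (k(xq[None, :], Xtr) @ np.linalg.solve(K, ytr))[0]

def lagpr_predict(X, y, xq, m=20):
    """laGPR-style prediction: fit a small GP on the m nearest neighbors only."""
    idx = np.argsort(np.sum((X - xq) ** 2, axis=1))[:m]
    return gp_predict(X[idx], y[idx], xq)

rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, size=(500, 2))
y = np.sin(3.0 * X[:, 0]) * X[:, 1]    # smooth stand-in for a stress component
xq = np.array([0.2, -0.4])
pred = lagpr_predict(X, y, xq)         # close to sin(0.6) * (-0.4)
```

Each prediction costs a solve on an m-by-m system instead of an N-by-N one, which is what makes the approach tractable when it is queried at every Gauss point of a macroscale FE model.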


Model-data-driven constitutive responses: application to a multiscale computational framework

arXiv.org Machine Learning

Computational multiscale methods for analyzing and deriving constitutive responses have been used as a tool in engineering problems because of their ability to combine information at different length scales. However, their application in a nonlinear framework can be limited by high computational costs, numerical difficulties, and/or inaccuracies. In this paper, a hybrid methodology is presented which combines classical constitutive laws (model-based), a data-driven correction component, and computational multiscale approaches. A model-based material representation is locally improved with data from lower scales obtained by means of a nonlinear numerical homogenization procedure, leading to a model-data-driven approach. Therefore, macroscale simulations explicitly incorporate the true microscale response, maintaining the same level of accuracy that would be obtained with online micro-macro simulations but with a computational cost comparable to classical model-driven approaches. In the proposed approach, both model and data play a fundamental role, allowing for synergistic integration of a physics-based response and a machine-learning black box. Numerical applications are implemented in two dimensions for different tests investigating both material and structural responses in large deformation.
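The model-plus-correction structure can be illustrated with a scalar toy problem: a model-based law carries the physics, and a data-driven component is fit only to the model's residual against the lower-scale data. The linear "model-based" law, the quadratic microscale response, and the least-squares correction below are assumptions for the sketch, not the paper's homogenization setup.

```python
import numpy as np

# model-based component: an (inaccurate) linear elastic law with assumed stiffness
model_stress = lambda eps: 1.0 * eps

# stand-in "microscale" data with a softening response the model misses
rng = np.random.default_rng(4)
eps = rng.uniform(0.0, 0.5, size=200)
sigma_micro = 1.3 * eps - 0.8 * eps**2

# data-driven correction: least-squares fit of the model's residual
A = np.vstack([eps, eps**2]).T
coef, *_ = np.linalg.lstsq(A, sigma_micro - model_stress(eps), rcond=None)

# hybrid model-data-driven response: model prediction plus learned correction
hybrid_stress = lambda e: model_stress(e) + coef[0] * e + coef[1] * e**2
```

Because the correction only has to learn the discrepancy, the hybrid response reduces to the model-based law wherever the residual is small, which is the sense in which model and data cooperate.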