
Collaborating Authors

 Padmanabha, Govinda Anantha


Condensed Stein Variational Gradient Descent for Uncertainty Quantification of Neural Networks

arXiv.org Machine Learning

In the context of uncertainty quantification (UQ) the curse of dimensionality, whereby quantification efficiency degrades drastically with parameter dimension, is particularly extreme with highly parameterized models such as neural networks (NNs). Fortunately, in many cases, these models are overparameterized in the sense that the number of parameters can be reduced with negligible effects on accuracy and sometimes improvements in generalization [1]. Furthermore, NNs often have parameterizations with fungible parameters, such that permutations of the values and connections lead to equivalent output responses. This suggests that methods which simultaneously sparsify and characterize the uncertainty of a model, while handling and taking advantage of the symmetries inherent in the model, are potentially advantageous approaches. Although Markov chain Monte Carlo (MCMC) methods [2] have been the reference standard to generate samples for UQ methods, they can be temperamental and do not scale well to high-dimensional models. More recently, there has been widespread use of variational inference (VI) methods, which cast the parameter posterior sampling problem as an optimization of a surrogate posterior guided by a suitable objective, such as the Kullback-Leibler (KL) divergence between the predictive posterior and the true posterior induced by the data. In particular, there is now a family of model ensemble methods based on Stein's identity [3], such as Stein variational gradient descent (SVGD) [4], projected SVGD [5], and Stein variational Newton's method [6]. These methods have advantages over MCMC methods by virtue of propagating in parallel a coordinated ensemble of particles that represent the empirical posterior.
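To make the SVGD update concrete, the following is a minimal NumPy sketch of the standard SVGD particle update (an RBF kernel with the median-heuristic bandwidth, applied to a toy 1D standard-normal target). The function names and the toy target are illustrative assumptions, not the paper's implementation; the key line is the Stein variational direction, which combines a kernel-weighted average of the score function (driving particles toward high posterior density) with a kernel-gradient repulsion term (keeping the ensemble spread out).

```python
import numpy as np

def svgd_step(x, grad_logp, eps=0.05):
    """One SVGD update for particles x of shape (n, d).

    grad_logp: function returning the score \nabla log p(x), shape (n, d).
    This is a sketch of the update in Liu & Wang (2016), not the
    specific condensed variant described in the abstract above.
    """
    n = x.shape[0]
    diffs = x[:, None, :] - x[None, :, :]             # (n, n, d) pairwise x_i - x_j
    sq = np.sum(diffs ** 2, axis=-1)                  # (n, n) squared distances
    h = np.median(sq) / np.log(n + 1) + 1e-8          # median-heuristic bandwidth
    K = np.exp(-sq / h)                               # RBF kernel matrix k(x_j, x_i)
    # Repulsive term: sum_j grad_{x_j} k(x_j, x_i) = (2/h) sum_j K_ij (x_i - x_j)
    gradK = (2.0 / h) * np.einsum("ij,ijd->id", K, diffs)
    phi = (K @ grad_logp(x) + gradK) / n              # Stein variational direction
    return x + eps * phi

# Toy usage: push a mis-initialized ensemble toward a standard normal,
# whose score is grad log p(x) = -x.
rng = np.random.default_rng(0)
particles = rng.normal(loc=5.0, scale=0.5, size=(100, 1))
for _ in range(500):
    particles = svgd_step(particles, lambda x: -x)
```

After enough steps, the particle mean and standard deviation approximate those of the target; the repulsion term is what distinguishes the coordinated ensemble from independent gradient ascent, which would collapse all particles onto the posterior mode.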


A review on data-driven constitutive laws for solids

arXiv.org Artificial Intelligence

This review article highlights state-of-the-art data-driven techniques to discover, encode, surrogate, or emulate constitutive laws that describe the path-independent and path-dependent response of solids. Our objective is to provide an organized taxonomy for a large spectrum of methodologies developed in the past decades and to discuss the benefits and drawbacks of the various techniques for interpreting and forecasting mechanical behavior across different scales. Distinguishing between machine-learning-based and model-free methods, we further categorize approaches based on their interpretability and on their learning process/type of required data, while discussing the key problems of generalization and trustworthiness. We attempt to provide a road map of how these can be reconciled in a data-availability-aware context. We also touch upon relevant aspects such as data sampling techniques, design of experiments, verification, and validation.