Discovering interpretable elastoplasticity models via the neural polynomial method enabled symbolic regressions
Bahmani, Bahador; Suh, Hyoung Suk; Sun, WaiChing
–arXiv.org Artificial Intelligence
Conventional neural network elastoplasticity models are often perceived as lacking interpretability. This paper introduces a two-step machine learning approach that returns mathematical models interpretable by human experts. In particular, we introduce a surrogate model in which yield surfaces are expressed in terms of a set of single-variable feature mappings obtained from supervised learning. A post-processing step then re-interprets these single-variable neural network mapping functions into mathematical form through symbolic regression. This divide-and-conquer approach offers several important advantages. First, it overcomes the scaling issue of symbolic regression algorithms. From a practical perspective, it enhances the portability of learned models to partial differential equation solvers written in different programming languages. Finally, it enables a concrete understanding of material attributes, such as the convexity and symmetries of models, through automated derivation and reasoning. Numerical examples are provided, along with open-source code, to enable third-party validation.
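The second step described above — re-interpreting each learned single-variable mapping symbolically — can be sketched in miniature. In this stand-in (not the paper's actual code; all names here are hypothetical), a trained univariate feature is replaced by a known function, and a small monomial basis is fit to it by least squares, playing the role of the symbolic-regression post-processing applied independently to each 1-D mapping:

```python
# Illustrative sketch of the divide-and-conquer idea: each learned
# single-variable mapping is symbolically re-fit on its own, so the
# regression never faces the full multivariate yield function.
# Stdlib-only stand-in; the paper's pipeline uses neural nets and a
# symbolic regression engine instead of a fixed monomial basis.

def polyfit_1d(xs, ys, degree):
    """Least-squares fit of ys ~ sum_k c_k * x**k via the normal equations."""
    n = degree + 1
    # Normal equations A c = b with A[j][k] = sum x^(j+k), b[j] = sum y * x^j.
    A = [[sum(x ** (j + k) for x in xs) for k in range(n)] for j in range(n)]
    b = [sum(y * x ** j for x, y in zip(xs, ys)) for j in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = (b[r] - s) / A[r][r]
    return coeffs

# Stand-in for one trained univariate feature mapping (hypothetical):
learned_feature = lambda x: 0.5 * x ** 2 - x
xs = [i / 20.0 for i in range(-40, 41)]          # samples on [-2, 2]
ys = [learned_feature(x) for x in xs]
coeffs = polyfit_1d(xs, ys, degree=3)            # ~ [0, -1, 0.5, 0]
```

Because each fit is one-dimensional, the search space stays small — this is the scaling advantage the abstract refers to — and the recovered closed-form expression can be ported to any PDE solver or inspected for properties such as convexity.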
Feb-1-2024
- Country:
- North America > United States > Illinois (0.14)
- Genre:
- Research Report > New Finding (0.67)
- Industry:
- Energy (0.92)
- Government
- Military (0.45)
- Regional Government (0.46)
- Technology: