Appendix A: An Example for Scenario 2. We give an example of G(A) …

Neural Information Processing Systems

Below is a detailed explanation of the comparative methods covered in the paper. The network architecture of PI-DeepONet used for Burgers' equation is such that both … In order to solve the Eq. … Fig. 6 shows model predictions of MAD-L and MAD-LM compared with the reference solutions under … Fig. 7(a) shows that the accuracy of MAD-L after convergence increases with … Fig. 7(b) shows that the accuracy and convergence speed of MAD-LM do not change … For Burgers' equation, we also consider the scenario where the viscosity coefficients … Fig. 8 compares the convergence curves of mean … MAD-LM shows a clear speed and accuracy improvement over From-Scratch and Transfer-Learning. We also investigated how the dimension of the latent vector (latent size) affects performance on Burgers' equation. As Fig. 9(a) shows, MAD-L performs differently at different latent sizes, with the best performance achieved at a latent size of 128.
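The MAD-L and MAD-LM variants are not defined in this excerpt; on our reading of the paper, MAD-L adapts only the per-instance latent code while keeping the pre-trained network frozen, whereas MAD-LM fine-tunes the latent code and the network weights together. Below is a minimal, hypothetical sketch of that distinction; the `model`, `z`, and `pde_loss` names are illustrative placeholders, not the authors' code.

```python
# Sketch (assumed reading): adapting a pre-trained MAD model to one new
# PDE instance. MAD-L updates the latent code only; MAD-LM also updates weights.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(2 + 128, 64), nn.Tanh(), nn.Linear(64, 1))
z = torch.zeros(128, requires_grad=True)   # per-instance latent code

def pde_loss(model, z, x):
    # Placeholder loss; a real one would be the physics-informed PDE residual.
    u = model(torch.cat([x, z.expand(x.shape[0], -1)], dim=-1))
    return u.pow(2).mean()

opt_l = torch.optim.Adam([z], lr=1e-3)                              # MAD-L
opt_lm = torch.optim.Adam([z] + list(model.parameters()), lr=1e-3)  # MAD-LM

opt = opt_lm   # choose the adaptation mode for this instance
for step in range(500):
    opt.zero_grad()
    pde_loss(model, z, torch.rand(256, 2)).backward()
    opt.step()
```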



Meta-Auto-Decoder for Solving Parametric Partial Differential Equations

Huang, Xiang, Ye, Zhanhong, Liu, Hongsheng, Shi, Beiji, Wang, Zidong, Yang, Kang, Li, Yang, Weng, Bingya, Wang, Min, Chu, Haotian, Zhou, Jing, Yu, Fan, Hua, Bei, Chen, Lei, Dong, Bin

arXiv.org Artificial Intelligence

Partial Differential Equations (PDEs) are ubiquitous in many disciplines of science and engineering and notoriously difficult to solve. In general, closed-form solutions of PDEs are unavailable and numerical approximation methods are computationally expensive. The parameters of PDEs are variable in many applications, such as inverse problems, control and optimization, risk assessment, and uncertainty quantification. In these applications, our goal is to solve parametric PDEs rather than a single instance of them. Our proposed approach, called Meta-Auto-Decoder (MAD), treats solving parametric PDEs as a meta-learning problem and utilizes the Auto-Decoder structure in \cite{park2019deepsdf} to deal with different tasks/PDEs. Physics-informed losses induced from the PDE governing equations and boundary conditions are used as the training losses for different tasks. The goal of MAD is to learn a good model initialization that can generalize across different tasks, and eventually enables unseen tasks to be learned faster. The inspiration for MAD comes from the (conjectured) low-dimensional structure of parametric PDE solutions, and we explain our approach from the perspective of manifold learning. Finally, we demonstrate the power of MAD through extensive numerical studies, including Burgers' equation, Laplace's equation and time-domain Maxwell's equations. MAD exhibits faster convergence speed without losing accuracy compared with other deep learning methods.
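The abstract's core mechanism, a shared network conditioned on a per-task trainable latent code and trained with a physics-informed loss, can be sketched concretely. The following is a minimal PyTorch-style illustration of that idea, not the authors' implementation: `MADNet`, `burgers_residual`, the network sizes, and the viscosity family are assumptions, boundary/initial-condition loss terms are omitted for brevity, and the latent size of 128 is taken from the Fig. 9(a) result quoted above.

```python
# Minimal sketch of the MAD idea: a shared network u_theta(x, t, z)
# conditioned on a per-task latent code z, trained with a PDE residual loss.
import torch
import torch.nn as nn

LATENT_SIZE = 128  # best MAD-L latent size reported in Fig. 9(a)

class MADNet(nn.Module):
    """Shared network u_theta(x, t, z) with a per-task latent code z."""
    def __init__(self, latent_size=LATENT_SIZE, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + latent_size, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, xt, z):
        # xt: (N, 2) space-time points; z: (latent_size,) latent code
        return self.net(torch.cat([xt, z.expand(xt.shape[0], -1)], dim=-1))

def burgers_residual(model, xt, z, nu):
    """Physics-informed residual u_t + u*u_x - nu*u_xx for 1D Burgers' equation."""
    xt = xt.clone().requires_grad_(True)
    u = model(xt, z)
    du = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = du[:, :1], du[:, 1:]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, :1]
    return u_t + u * u_x - nu * u_xx

# Meta-training (auto-decoder style): one trainable latent code per PDE
# instance, optimized jointly with the shared network weights.
model = MADNet()
nus = [0.01, 0.05, 0.1]  # hypothetical family of viscosity coefficients
codes = [torch.zeros(LATENT_SIZE, requires_grad=True) for _ in nus]
opt = torch.optim.Adam(list(model.parameters()) + codes, lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    xt = torch.rand(256, 2)  # random collocation points in [0,1]^2
    loss = sum(burgers_residual(model, xt, z, nu).pow(2).mean()
               for z, nu in zip(codes, nus))
    loss.backward()
    opt.step()
```

After meta-training, a new PDE instance is handled by adapting the latent code (and optionally the weights, as in MAD-LM) rather than training from scratch, which is where the reported speedup comes from.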