Lang-PINN: From Language to Physics-Informed Neural Networks via a Multi-Agent Framework

He, Xin, You, Liangliang, Tian, Hongduan, Han, Bo, Tsang, Ivor, Ong, Yew-Soon

arXiv.org Artificial Intelligence 

Physics-informed neural networks (PINNs) provide a powerful approach for solving partial differential equations (PDEs), but constructing a usable PINN remains labor-intensive and error-prone. Scientists must interpret problems as PDE formulations, design architectures and loss functions, and implement stable training pipelines. Existing large language model (LLM) approaches address isolated steps such as code generation or architecture suggestion, but typically assume a formal PDE is already specified and therefore lack an end-to-end perspective. We present Lang-PINN, an LLM-driven multi-agent system that builds trainable PINNs directly from natural language task descriptions. Lang-PINN coordinates four complementary agents: a PDE Agent that parses task descriptions into symbolic PDEs, a PINN Agent that selects architectures, a Code Agent that generates modular implementations, and a Feedback Agent that executes and diagnoses errors for iterative refinement. This design transforms informal task statements into executable and verifiable PINN code. Experiments show that Lang-PINN achieves substantially lower errors and greater robustness than competitive baselines: mean squared error (MSE) is reduced by up to 3-5 orders of magnitude, end-to-end execution success improves by more than 50%, and time overhead is reduced by up to 74%.

Partial differential equations (PDEs) are central to scientific computing, underpinning applications in physics, engineering, and materials science. Physics-informed neural networks (PINNs) (Raissi et al., 2019) have emerged as a flexible framework that embeds governing equations into trainable neural models, offering a unified approach for forward, inverse, and data-scarce problems (Karniadakis et al., 2021; Lu et al., 2021).
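The four-agent pipeline described above can be sketched as a simple orchestration loop. This is an illustrative sketch only: the agent names follow the paper, but every interface, data structure, and stub below is an assumption, not the authors' implementation (in the real system each agent would be backed by an LLM call).

```python
from dataclasses import dataclass, field

@dataclass
class PDESpec:
    # Hypothetical symbolic-PDE container; fields are illustrative assumptions.
    equation: str                       # e.g. "u_t - 0.1*u_xx = 0"
    domain: str                         # e.g. "x in [0,1], t in [0,1]"
    conditions: list = field(default_factory=list)

def pde_agent(task: str) -> PDESpec:
    # Parse a natural-language task into a symbolic PDE (stubbed here).
    return PDESpec("u_t - 0.1*u_xx = 0", "x in [0,1], t in [0,1]",
                   ["u(x,0)=sin(pi*x)", "u(0,t)=u(1,t)=0"])

def pinn_agent(spec: PDESpec) -> dict:
    # Select an architecture for the parsed PDE (stubbed heuristic).
    return {"layers": [2, 64, 64, 1], "activation": "tanh"}

def code_agent(spec: PDESpec, arch: dict) -> str:
    # Generate a modular training script (stubbed as a string).
    return f"# train PINN {arch['layers']} on: {spec.equation}"

def feedback_agent(code: str) -> tuple[bool, str]:
    # Execute the script and diagnose failures (stubbed as success).
    return True, ""

def lang_pinn(task: str, max_rounds: int = 3) -> str:
    # End-to-end loop: parse once, then generate/execute/refine.
    spec = pde_agent(task)
    arch = pinn_agent(spec)
    for _ in range(max_rounds):
        code = code_agent(spec, arch)
        ok, diagnosis = feedback_agent(code)
        if ok:
            return code
        arch = pinn_agent(spec)  # refinement would use the diagnosis
    raise RuntimeError("failed to produce runnable PINN code")
```

The loop terminates either when the Feedback Agent reports a successful run or after a fixed refinement budget, mirroring the iterative-refinement design the abstract describes.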
Although libraries and benchmarks such as DeepXDE (Lu et al., 2021), PINNacle (Hao et al., 2023), and PDEBench (Takamoto et al., 2022) have been developed, deploying a trainable PINN still requires expert-level manual effort in PDE specification, architecture design, and optimization tuning.
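To make concrete what "embedding governing equations into trainable neural models" entails, the following minimal sketch evaluates the composite PINN loss for the 1D Poisson problem u''(x) = f(x) on [0,1] with u(0) = u(1) = 0. It is a toy illustration under stated assumptions: the network is a tiny untrained MLP, and the second derivative is approximated by finite differences for brevity, whereas practical PINNs use automatic differentiation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Tiny untrained MLP u_theta(x): 1 -> 16 -> 1 (weights are placeholders).
W1, b1 = rng.normal(size=(16, 1)) * 0.5, np.zeros((16, 1))
W2, b2 = rng.normal(size=(1, 16)) * 0.5, np.zeros((1, 1))

def u(x):
    # Forward pass of the surrogate solution; x has shape (1, n).
    return W2 @ np.tanh(W1 @ x + b1) + b2

def f(x):
    # Source term chosen so the exact solution is sin(pi*x).
    return -np.pi**2 * np.sin(np.pi * x)

def pinn_loss(n=64, h=1e-3):
    # Collocation points in the interior of the domain.
    x = rng.uniform(h, 1 - h, size=(1, n))
    # Central-difference approximation of u''(x) (stand-in for autodiff).
    u_xx = (u(x + h) - 2 * u(x) + u(x - h)) / h**2
    residual = np.mean((u_xx - f(x)) ** 2)          # PDE residual term
    boundary = float(u(np.array([[0.0]])) ** 2      # boundary penalty
                     + u(np.array([[1.0]])) ** 2)
    return residual + boundary                      # composite loss

loss = pinn_loss()
```

Training a PINN means minimizing this composite loss over the network weights; the manual choices involved (collocation sampling, loss weighting, architecture, optimizer schedule) are exactly the expert-level effort the paper aims to automate.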