Scalable Back-Propagation-Free Training of Optical Physics-Informed Neural Networks
Zhao, Yequan, Yu, Xinling, Xiao, Xian, Chen, Zhixiong, Liu, Ziyue, Kurczveil, Geza, Beausoleil, Raymond G., Liu, Sijia, Zhang, Zheng
arXiv.org Artificial Intelligence
Physics-informed neural networks (PINNs) have shown promise in solving partial differential equations (PDEs), with growing interest in their energy-efficient, real-time training on edge devices. Photonic computing offers a potential solution to achieve this goal because of its ultra-high operation speed. However, the lack of photonic memory and the large device sizes prevent training real-size PINNs on photonic chips. This paper proposes a completely back-propagation-free (BP-free) and highly scalable framework for training real-size PINNs on silicon photonic platforms. Our approach involves three key innovations: (1) a sparse-grid Stein derivative estimator to avoid back-propagation in the loss evaluation of a PINN, (2) a dimension-reduced zeroth-order optimization via tensor-train decomposition to achieve better scalability and convergence in BP-free training, and (3) a scalable on-chip photonic PINN training accelerator design using photonic tensor cores. We validate our numerical methods on both low- and high-dimensional PDE benchmarks. Through circuit simulation based on real device parameters, we further demonstrate the significant performance benefits (e.g., real-time training and a large reduction in chip area) of our photonic accelerator.
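To make the two algorithmic ideas in the abstract concrete, here is a minimal sketch of a Stein (Gaussian-smoothing) derivative estimator, which recovers gradients from forward evaluations only via the identity ∇ₓ E[f(x + σu)] = E[f(x + σu) · u] / σ for u ~ N(0, I). This Monte Carlo version is illustrative; the paper's estimator instead evaluates the expectation on sparse-grid quadrature points, and the function name and parameters here are assumptions for illustration.

```python
import numpy as np

def stein_grad_estimate(f, x, sigma=0.1, n_samples=256, rng=None):
    """BP-free gradient estimate of f at x via Gaussian smoothing.

    Stein's identity gives grad E[f(x + sigma*u)] = E[f(x + sigma*u) * u] / sigma
    for u ~ N(0, I), so only forward passes of f are needed (no autodiff).
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal((n_samples, x.size))
    fx = np.array([f(x + sigma * ui) for ui in u])  # forward evaluations only
    return (fx[:, None] * u).mean(axis=0) / sigma
```

Likewise, a generic two-point zeroth-order SGD step shows why the tensor-train reduction matters: the variance of such estimates grows with the number of trainable parameters, so perturbing only the small set of TT-core parameters (rather than full weight matrices) improves convergence. Again a hedged sketch, not the paper's exact optimizer:

```python
def zo_sgd_step(loss, theta, lr=1e-3, mu=1e-3, rng=None):
    """One zeroth-order SGD step with a two-point random gradient estimate.

    In the dimension-reduced setting, theta holds the compact tensor-train
    core parameters instead of the full network weights.
    """
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(theta.shape)
    g = (loss(theta + mu * z) - loss(theta - mu * z)) / (2 * mu) * z
    return theta - lr * g
```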
Feb-17-2025