Neural Information Processing Systems
Additionally, though traditional PINNs (vanilla-PINNs) are typically stored and trained in 32-bit floating-point (fp32) on the GPU, we show that for DT-PINNs, using fp64 on the GPU leads to significantly faster training times than fp32 vanilla-PINNs with comparable accuracy. PINNs can be used both to discover/infer PDEs that govern a given data set, and as direct PDE solvers.
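As a minimal sketch of the fp64 setup the abstract describes, the snippet below switches a small PINN-style network to 64-bit floats in PyTorch. The network architecture and sizes here are purely illustrative assumptions, not the paper's actual model; the point is only that one global dtype switch puts all parameters and new tensors in fp64.

```python
import torch

# Assumed illustrative setup: make all new tensors and parameters fp64,
# mirroring the fp64 training regime the paper reports for DT-PINNs.
torch.set_default_dtype(torch.float64)

# A toy PINN-style network (hypothetical sizes, for illustration only).
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

# Collocation points inherit the fp64 default dtype.
x = torch.rand(16, 2)
print(net(x).dtype)  # torch.float64
```

On GPU, the same code runs in fp64 after `net.cuda()` and `x.cuda()`; whether fp64 is faster overall depends on the hardware's fp64 throughput, which is the trade-off the paper's comparison addresses.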