Neural Networks Trained to Solve Differential Equations Learn General Representations

Martin Magill, Faisal Qureshi, Hendrick de Haan

Neural Information Processing Systems 

We introduce a technique based on the singular vector canonical correlation analysis (SVCCA) for measuring the generality of neural network layers across a continuously-parametrized set of tasks. We illustrate this method by studying generality in neural networks trained to solve parametrized boundary value problems based on the Poisson partial differential equation. We find that the first hidden layers are general, and that they learn generalized coordinates over the input domain. Deeper layers are successively more specific. We validate our approach against an existing technique that measures layer generality using transfer learning experiments, and find excellent agreement between the two methods. We also note that our method is much faster, particularly for continuously-parametrized problems.
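For readers unfamiliar with SVCCA, the core similarity computation can be sketched in a few lines of NumPy. This is an illustrative implementation of the general SVCCA procedure (SVD-based dimensionality reduction of each layer's activations, followed by canonical correlation analysis between the reduced subspaces), not the authors' code; the function name and the truncation parameter `keep_dims` are assumptions for this sketch.

```python
import numpy as np

def svcca_similarity(acts1, acts2, keep_dims=20):
    """Mean canonical correlation between two activation matrices.

    acts1, acts2: arrays of shape (n_samples, n_neurons), i.e. the
    activations of two layers evaluated on the same set of inputs.
    """
    def reduce(a, k):
        # Center, then keep the top-k singular directions of the activations.
        a = a - a.mean(axis=0)
        u, s, _ = np.linalg.svd(a, full_matrices=False)
        return u[:, :k] * s[:k]

    x = reduce(acts1, keep_dims)
    y = reduce(acts2, keep_dims)

    # CCA via QR decomposition: the singular values of Qx^T Qy are the
    # canonical correlations between the two reduced subspaces.
    qx, _ = np.linalg.qr(x)
    qy, _ = np.linalg.qr(y)
    corrs = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return float(np.clip(corrs, 0.0, 1.0).mean())
```

Comparing a layer's representation against the corresponding layer of networks trained on other tasks in the parametrized family would then give a per-layer generality profile: high similarity across tasks indicates a general layer, low similarity a task-specific one.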