Neural Networks Trained to Solve Differential Equations Learn General Representations
We introduce a technique based on the singular vector canonical correlation analysis (SVCCA) for measuring the generality of neural network layers across a continuously parametrized set of tasks. We illustrate this method by studying generality in neural networks trained to solve parametrized boundary value problems based on the Poisson partial differential equation. We find that the first hidden layers are general, and that they learn generalized coordinates over the input domain. Deeper layers are successively more specific.
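The comparison underlying this approach can be sketched briefly. SVCCA compares two layers by (1) reducing each layer's activation matrix to its top singular directions, and (2) computing canonical correlations between the two reduced subspaces; the mean correlation serves as a similarity score. The following is a minimal NumPy sketch of that two-step procedure, not the authors' implementation; the variance threshold and input shapes are illustrative assumptions.

```python
import numpy as np

def svcca_similarity(acts1, acts2, var_fraction=0.99):
    """Mean SVCCA correlation between two activation matrices.

    acts1, acts2: (num_samples, num_neurons) activations of two layers
    evaluated on the same inputs. var_fraction is an assumed threshold
    for the SVD truncation step.
    """
    def top_singular_directions(acts):
        centered = acts - acts.mean(axis=0)
        u, s, _ = np.linalg.svd(centered, full_matrices=False)
        # keep enough singular directions to explain var_fraction of variance
        explained = np.cumsum(s ** 2) / np.sum(s ** 2)
        keep = int(np.searchsorted(explained, var_fraction)) + 1
        return u[:, :keep] * s[:keep]  # activations projected onto top directions

    x = top_singular_directions(acts1)
    y = top_singular_directions(acts2)

    # CCA between the two reduced subspaces: orthonormalize each, then the
    # singular values of qx^T qy are the canonical correlations.
    qx, _ = np.linalg.qr(x)
    qy, _ = np.linalg.qr(y)
    corrs = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return float(corrs.mean())
```

A layer compared with itself yields a score of 1, while unrelated random activations score much lower; generality of a layer across tasks is then read off by comparing that layer's activations between networks trained on different task parameters.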
We thank the reviewers for their insightful feedback, and we appreciate the opportunity to improve our paper. We would like to emphasize that Theorem 1 is the most important contribution of our paper because of its generality. In the Gaussian case, our sample complexity result follows directly from the expression for the optimal loss. Finally, while Dohmatob's bounds become non-trivial only when the adversarial [sentence truncated in source]. We will also add a clearer description of the "translate and pair in place" coupling. Comparisons with Sinha et al. appear in Section 7, and we compare to Dohmatob above.