On the Correctness of Automatic Differentiation for Neural Networks with Machine-Representable Parameters
Wonyeol Lee, Sejun Park, Alex Aiken
arXiv.org Artificial Intelligence
Recent work has shown that forward- and reverse-mode automatic differentiation (AD) over the reals is almost always correct in a mathematically precise sense. However, actual programs work with machine-representable numbers (e.g., floating-point numbers), not reals. In this paper, we study the correctness of AD when the parameter space of a neural network consists solely of machine-representable numbers. In particular, we analyze two sets of parameters on which AD can be incorrect: the incorrect set, on which the network is differentiable but AD does not compute its derivative, and the non-differentiable set, on which the network is non-differentiable. For a neural network with bias parameters, we first prove that the incorrect set is always empty. We then prove a tight bound on the size of the non-differentiable set, which is linear in the number of non-differentiabilities in activation functions, and give a simple necessary and sufficient condition for a parameter to be in this set. We further prove that AD always computes a Clarke subderivative, even on the non-differentiable set. We also extend these results to neural networks possibly without bias parameters.
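The Clarke-subderivative guarantee is easy to observe in practice. Below is a minimal sketch in JAX (our illustration, not code from the paper): at the machine-representable point 0.0, where ReLU is non-differentiable, reverse-mode AD still returns an element of ReLU's Clarke subdifferential [0, 1] at 0.

```python
import jax

# ReLU is non-differentiable at x = 0; its Clarke subdifferential there
# is the interval [0, 1]. jax.nn.relu defines its derivative at 0 to be 0,
# so reverse-mode AD returns an element of that interval -- a valid
# Clarke subderivative, consistent with the guarantee stated above.
grad_relu = jax.grad(jax.nn.relu)

print(grad_relu(1.0))   # 1.0 (differentiable point: the true derivative)
print(grad_relu(-1.0))  # 0.0 (differentiable point: the true derivative)
print(grad_relu(0.0))   # 0.0 (kink: a Clarke subderivative in [0, 1])
```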
Jun 6, 2023