Finite-time Lyapunov exponents of deep neural networks
Storm, L., Linander, H., Bec, J., Gustavsson, K., Mehlig, B.
arXiv.org Artificial Intelligence
Université Côte d'Azur, Inria, CNRS, Cemef, Sophia-Antipolis, F-06900, France

We compute how small input perturbations affect the output of deep neural networks, exploring an analogy between deep networks and dynamical systems, where the growth or decay of local perturbations is characterised by finite-time Lyapunov exponents. We show that the maximal exponent forms geometrical structures in input space, akin to coherent structures in dynamical systems. Ridges of large positive exponents divide input space into the different regions that the network associates with different classes. These ridges visualise the geometry that deep networks construct in input space, shedding light on the fundamental mechanisms underlying their learning capabilities. Deep neural networks can be trained to model complex functions [8].
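The maximal finite-time Lyapunov exponent described in the abstract can be illustrated with a minimal sketch: for an input x, propagate it through the network while accumulating the input-output Jacobian by the chain rule, then take (1/L) times the log of the Jacobian's largest singular value, where L is the number of layers. The tanh multilayer perceptron, the layer widths, and all function names below are illustrative assumptions, not the paper's actual architecture or code.

```python
import numpy as np

def forward_with_jacobian(x, weights, biases):
    """Forward pass through a tanh MLP, accumulating the
    input-output Jacobian J = d(output)/d(input) via the chain rule."""
    J = np.eye(x.size)
    h = x
    for W, b in zip(weights, biases):
        h = np.tanh(W @ h + b)
        # d tanh(z)/dz = 1 - tanh(z)^2, applied elementwise to each row
        J = (1.0 - h**2)[:, None] * (W @ J)
    return h, J

def max_ftle(x, weights, biases):
    """Maximal finite-time Lyapunov exponent at input x:
    (1/L) ln sigma_max(J), with L the number of layers."""
    _, J = forward_with_jacobian(x, weights, biases)
    sigma_max = np.linalg.svd(J, compute_uv=False)[0]
    return np.log(sigma_max) / len(weights)

# Toy example: random untrained network (illustrative dimensions only)
rng = np.random.default_rng(0)
dims = [4, 16, 16, 4]
weights = [rng.normal(0.0, 1.0 / np.sqrt(m), (n, m))
           for m, n in zip(dims[:-1], dims[1:])]
biases = [np.zeros(n) for n in dims[1:]]
x = rng.normal(size=dims[0])
lam = max_ftle(x, weights, biases)
```

Scanning `max_ftle` over a grid of inputs would reveal the ridges of large positive exponents that, per the abstract, separate the regions the network assigns to different classes.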
Jun-21-2023