Provable Bounds on the Hessian of Neural Networks: Derivative-Preserving Reachability Analysis
Sina Sharifi, Mahyar Fazlyab
arXiv.org Artificial Intelligence
We propose a novel reachability analysis method tailored to neural networks with differentiable activations. Our idea hinges on a sound abstraction of the neural network map based on a first-order Taylor expansion together with a bound on the remainder. To this end, we derive analytical bounds on the network's first derivative (gradient) and second derivative (Hessian). A key aspect of our method is a loop transformation of the activation functions that exploits their monotonicity effectively. The resulting end-to-end abstraction locally preserves the derivative information, yielding accurate bounds on small input sets. Finally, for larger input sets, we employ a branch-and-bound framework to refine the abstraction recursively. We evaluate our method numerically on different examples and compare the results with relevant state-of-the-art methods.
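To make the pipeline concrete, below is a minimal sketch of a first-order Taylor abstraction with a Lagrange remainder bound, refined by bisection. It is not the authors' implementation: the function names taylor_reach_bounds and refine are illustrative, and the scalar hessian_bound (a bound on the spectral norm of each output's Hessian over the input box) is assumed to be given; computing such a bound analytically is precisely the paper's contribution.

import torch

def taylor_reach_bounds(f, center, radius, hessian_bound):
    """Enclose f over the box {x : |x_i - center_i| <= radius_i}.

    Uses f(x) = f(c) + J(c)(x - c) + r(x), where the Lagrange remainder
    satisfies |r_i(x)| <= 0.5 * hessian_bound * ||x - c||_2^2.
    """
    y = f(center).detach()
    J = torch.autograd.functional.jacobian(f, center)   # shape (out_dim, in_dim)
    lin = J.abs() @ radius                               # bound on |J (x - c)| over the box
    rem = 0.5 * hessian_bound * (radius ** 2).sum()      # remainder via ||x - c||_2^2 <= sum_i radius_i^2
    return y - lin - rem, y + lin + rem

def refine(f, center, radius, hessian_bound, depth):
    """Recursively bisect the input box and take the union of the enclosures."""
    lo, hi = taylor_reach_bounds(f, center, radius, hessian_bound)
    if depth == 0:
        return lo, hi
    axis = int(torch.argmax(radius))                     # split the widest axis
    half = radius.clone()
    half[axis] /= 2
    shift = torch.zeros_like(center)
    shift[axis] = half[axis]
    lo1, hi1 = refine(f, center - shift, half, hessian_bound, depth - 1)
    lo2, hi2 = refine(f, center + shift, half, hessian_bound, depth - 1)
    return torch.minimum(lo1, lo2), torch.maximum(hi1, hi2)

# Usage on a small tanh network; hessian_bound=10.0 is a placeholder value.
net = torch.nn.Sequential(torch.nn.Linear(2, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1))
lb, ub = refine(net, torch.zeros(2), torch.full((2,), 0.1), hessian_bound=10.0, depth=4)

Because the remainder term shrinks quadratically with the box radius while the linear term shrinks only linearly, each bisection step tightens the enclosure rapidly, which is the behavior the abstract describes for small input sets.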
Jun-6-2024