A Scalable Approach for Safe and Robust Learning via Lipschitz-Constrained Networks
Abdeen, Zain ul, Kekatos, Vassilis, Jin, Ming
–arXiv.org Artificial Intelligence
Certified robustness is a critical property for deploying neural networks (NNs) in safety-critical applications. A principled approach to achieving such guarantees is to constrain the global Lipschitz constant of the network. However, accurate methods for Lipschitz-constrained training often suffer from non-convex formulations and poor scalability due to their reliance on global semidefinite programs (SDPs). In this letter, we propose a convex training framework that enforces global Lipschitz constraints via semidefinite relaxation. By reparameterizing the NN using a loop transformation, we derive a convex admissibility condition that enables tractable and certifiable training. While the resulting formulation guarantees robustness, its scalability is limited by the size of the global SDP. To overcome this, we develop a randomized subspace linear matrix inequality (RS-LMI) approach that decomposes the global constraint into sketched layerwise constraints projected onto low-dimensional subspaces, yielding a smooth and memory-efficient training objective. Empirical results on MNIST, CIFAR-10, and ImageNet demonstrate that the proposed framework achieves competitive accuracy with significantly improved Lipschitz bounds and runtime performance.
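The two ingredients mentioned in the abstract can be illustrated with a short sketch: the standard layerwise Lipschitz bound (the product of per-layer spectral norms, which layerwise constraints like those above refine) and a randomized low-dimensional check of a layerwise constraint. This is not the paper's RS-LMI formulation; `sketched_constraint_violation`, the Gaussian sketch, and all dimensions are illustrative assumptions.

```python
import numpy as np

def spectral_norm(W, iters=50, seed=0):
    """Estimate the largest singular value of W by power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        u = W @ v          # forward pass through the layer
        v = W.T @ u        # adjoint pass
        v /= np.linalg.norm(v)
    return float(np.linalg.norm(W @ v))

def lipschitz_upper_bound(weights):
    """Product of per-layer spectral norms: a standard (often loose)
    global Lipschitz upper bound for a feed-forward network with
    1-Lipschitz activations such as ReLU."""
    return float(np.prod([spectral_norm(W) for W in weights]))

def sketched_constraint_violation(W, rho, sketch_dim=8, seed=0):
    """Illustrative sketch (not the paper's method): test the layerwise
    constraint W^T W <= rho^2 I only on a random low-dimensional
    subspace spanned by Gaussian sketch directions.  Returns the worst
    violation over the sketch; <= 0 means satisfied on the subspace."""
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((W.shape[1], sketch_dim))  # Gaussian sketch
    S /= np.linalg.norm(S, axis=0)                     # unit columns
    quad = np.sum((W @ S) ** 2, axis=0)                # v^T W^T W v per column
    return float(np.max(quad) - rho ** 2)
```

Because the constraint is only enforced on the sketched subspace, the check is cheap and differentiable in `W`, at the cost of being a relaxation of the full matrix inequality.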
Jul-1-2025