Compression-aware Training of Neural Networks using Frank-Wolfe
Zimmer, Max, Spiegel, Christoph, Pokutta, Sebastian
arXiv.org Artificial Intelligence
Many existing Neural Network pruning approaches either rely on retraining or induce a strong bias in order to converge to a sparse solution throughout training. A third paradigm, 'compression-aware' training, aims to obtain state-of-the-art dense models that are robust to a wide range of compression ratios using a single dense training run while also avoiding retraining. We propose a framework centered around a versatile family of norm constraints and the Stochastic Frank-Wolfe (SFW) algorithm that encourage convergence to well-performing solutions while inducing robustness towards convolutional filter pruning and low-rank matrix decomposition. Our method outperforms existing compression-aware approaches and, in the case of low-rank matrix decomposition, requires significantly fewer computational resources than approaches based on nuclear-norm regularization. Our findings indicate that dynamically adjusting the learning rate of SFW, as suggested by Pokutta et al. (2020), is crucial for the convergence and robustness of SFW-trained models, and we establish a theoretical foundation for that practice.
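To make the abstract's core mechanics concrete, the following is a minimal sketch of one Stochastic Frank-Wolfe update with the gradient-norm learning-rate rescaling attributed above to Pokutta et al. (2020). It uses an L2-norm ball as the constraint set purely for illustration; the paper proposes a broader family of norm constraints, and the function names, the `tau` radius parameter, and the exact rescaling formula here are assumptions, not the authors' implementation.

```python
import numpy as np

def lmo_l2_ball(grad, tau):
    """Linear minimization oracle for an L2 ball of radius tau:
    argmin_{||v||_2 <= tau} <grad, v> = -tau * grad / ||grad||_2.
    (Illustrative choice; the paper uses a more versatile constraint family.)"""
    norm = np.linalg.norm(grad)
    return -tau * grad / norm if norm > 0 else np.zeros_like(grad)

def sfw_step(theta, grad, tau, lr, rescale=True):
    """One Stochastic Frank-Wolfe update. With rescale=True, the step size
    is scaled by ||grad|| / ||v - theta||, a hedged reading of the dynamic
    learning-rate adjustment suggested by Pokutta et al. (2020)."""
    v = lmo_l2_ball(grad, tau)          # vertex returned by the oracle
    d = v - theta                       # Frank-Wolfe direction
    if rescale:
        lr = lr * np.linalg.norm(grad) / (np.linalg.norm(d) + 1e-12)
    gamma = min(lr, 1.0)                # clip so the convex combination stays feasible
    return theta + gamma * d
```

Because each iterate is a convex combination of the current point and a point inside the constraint set, the parameters remain feasible throughout training without any projection step, which is what lets the norm constraints shape the solution toward compressibility.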
Feb-14-2024