HYDRA: Pruning Adversarially Robust Neural Networks
– Neural Information Processing Systems
In safety-critical but computationally resource-constrained applications, deep learning faces two key challenges: lack of robustness against adversarial attacks and large neural network size (often millions of parameters). While the research community has extensively explored robust training and network pruning independently to address each of these challenges, only a few recent works have studied them jointly. However, these works inherit a heuristic pruning strategy that was developed for benign training, and this strategy performs poorly when integrated with robust training techniques, including adversarial training and verifiable robust training. To overcome this challenge, we propose to make pruning techniques aware of the robust training objective and let that objective guide the search for which connections to prune. We realize this insight by formulating the pruning objective as an empirical risk minimization problem, which is solved efficiently using SGD.
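The core idea of objective-guided pruning can be illustrated with a toy sketch: instead of pruning by weight magnitude, assign each connection a learnable importance score, keep the top-k scores, and update the scores with SGD on the task loss via a straight-through gradient (treating the hard top-k mask as the identity in the backward pass). The sketch below is an illustrative NumPy implementation on a linear regression toy problem, not the paper's actual code; the data, loss, and hyperparameters are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the target depends on only 2 of 6 input features.
X = rng.normal(size=(200, 6))
true_w = np.array([3.0, 0.0, -2.0, 0.0, 0.0, 0.0])
y = X @ true_w

# "Pretrained" weights (true weights plus noise) and randomly
# initialized importance scores, one score per connection.
w = true_w + 0.1 * rng.normal(size=6)
scores = rng.normal(size=6)
k = 2  # prune down to 2 surviving connections

def topk_mask(s, k):
    """Binary mask keeping the k connections with the largest scores."""
    m = np.zeros_like(s)
    m[np.argsort(s)[-k:]] = 1.0
    return m

lr = 0.01
for step in range(400):
    m = topk_mask(scores, k)
    pred = X @ (w * m)                      # forward pass with hard mask
    err = pred - y                          # MSE residual
    # Straight-through estimator: gradient flows through the mask as if
    # it were the identity, so d(loss)/d(score_i) ≈ d(loss)/d(w_i*m_i) * w_i.
    grad_scores = (X.T @ err / len(X)) * w
    scores -= lr * grad_scores              # SGD step on the scores only

kept = np.flatnonzero(topk_mask(scores, k))
print(kept)  # indices of the connections retained after score optimization
```

Because the scores are updated by the gradient of the empirical risk rather than ranked by magnitude alone, connections that matter to the loss see their scores rise even if they start out pruned; the same mechanism is what lets a robust-training loss (instead of this toy MSE) drive the pruning decision.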