AutoSparse: Towards Automated Sparse Training of Deep Neural Networks

Abhisek Kundu, Naveen K. Mellempudi, Dharma Teja Vooturi, Bharat Kaul, Pradeep Dubey

arXiv.org (Artificial Intelligence)

Sparse training is emerging as a promising avenue for reducing the computational cost of training neural networks. Several recent studies have proposed pruning methods that use learnable thresholds to efficiently explore the non-uniform distribution of sparsity inherent in these models. In this paper, we propose Gradient Annealing (GA), in which the gradients of masked weights are scaled down in a non-linear manner. GA provides an elegant trade-off between sparsity and accuracy without the need for additional sparsity-inducing regularization. We integrate GA with the latest learnable pruning methods to create an automated sparse training algorithm, AutoSparse, which achieves better accuracy and/or greater training/inference FLOPS reduction than existing learnable pruning methods for sparse ResNet50 and MobileNetV1 on ImageNet-1K. For example, AutoSparse achieves 2× and 7× reductions in training and inference FLOPS, respectively, for ResNet50 on ImageNet at 80% sparsity.

Deep neural networks (DNNs) have emerged as the preferred solution for many important problems in computer vision, language modeling, recommendation systems, and reinforcement learning. Models have grown larger and more complex over the years as they are applied to increasingly difficult problems on ever-growing datasets. In addition, DNNs are designed to operate in an overparameterized regime Arora et al. (2018); Belkin et al. (2019); Ardalani et al. (2019) to facilitate easier optimization using gradient descent methods. Consequently, the computational costs (memory and floating-point operations, FLOPS) of training and inference with state-of-the-art (SotA) models have been growing at an exponential rate (Amodei & Hernandez). Two major techniques for making DNNs more efficient are (1) reduced precision Micikevicius et al. (2017); Wang et al. (2018); Sun et al. (2019); Das et al. (2018); Kalamkar et al. (2019); Mellempudi et al. (2020), and (2) sparse representation. Today, SotA training hardware delivers significantly higher reduced-precision FLOPS than traditional FP32 computation, while support for structured sparsity is also emerging Mishra et al. (2021).
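
To make the Gradient Annealing idea concrete, the following is a minimal PyTorch-style sketch. It assumes a magnitude-based mask with a fixed scalar threshold and a hypothetical cosine decay schedule for the annealing factor alpha; the names GradientAnnealedMask and annealed_alpha are illustrative and do not reproduce the paper's implementation.

```python
import math
import torch


class GradientAnnealedMask(torch.autograd.Function):
    """Forward: zero out weights whose magnitude falls below `threshold`.
    Backward: pass full gradients to surviving weights and alpha-scaled
    gradients to masked weights (alpha is annealed toward 0 during training)."""

    @staticmethod
    def forward(ctx, weight, threshold, alpha):
        mask = (weight.abs() > threshold).to(weight.dtype)
        ctx.save_for_backward(mask)
        ctx.alpha = alpha
        return weight * mask

    @staticmethod
    def backward(ctx, grad_output):
        (mask,) = ctx.saved_tensors
        # Masked (pruned) weights still receive a small, annealed gradient,
        # so they can be regrown early in training but are frozen out later.
        grad_weight = grad_output * (mask + ctx.alpha * (1.0 - mask))
        return grad_weight, None, None


def annealed_alpha(step, total_steps, alpha0=1.0):
    # Hypothetical non-linear (cosine) decay from alpha0 to 0.
    return alpha0 * 0.5 * (1.0 + math.cos(math.pi * step / total_steps))


# Example usage inside a training step (weight stands in for a model parameter):
weight = torch.randn(64, 64, requires_grad=True)
alpha = annealed_alpha(step=100, total_steps=1000)
sparse_weight = GradientAnnealedMask.apply(weight, 0.05, alpha)
loss = sparse_weight.pow(2).sum()
loss.backward()  # weight.grad now reflects the annealed masking
```

In the method described above the pruning threshold is itself learnable; it is kept as a fixed scalar here purely to keep the sketch short.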
