
Collaborating Authors

 Kundu, Abhisek


AUTOSPARSE: Towards Automated Sparse Training of Deep Neural Networks

arXiv.org Artificial Intelligence

Sparse training is emerging as a promising avenue for reducing the computational cost of training neural networks. Several recent studies have proposed pruning methods using learnable thresholds to efficiently explore the non-uniform distribution of sparsity inherent within the models. In this paper, we propose Gradient Annealing (GA), where gradients of masked weights are scaled down in a non-linear manner. GA provides an elegant trade-off between sparsity and accuracy without the need for additional sparsity-inducing regularization. We integrated GA with the latest learnable pruning methods to create an automated sparse training algorithm called AutoSparse, which achieves better accuracy and/or training/inference FLOPS reduction than existing learnable pruning methods for sparse ResNet50 and MobileNetV1 on ImageNet-1K: AutoSparse achieves a 2x reduction in training FLOPS and a 7x reduction in inference FLOPS for ResNet50 on ImageNet at 80% sparsity.

Deep learning models (DNNs) have emerged as the preferred solution for many important problems in the domains of computer vision, language modeling, recommendation systems and reinforcement learning. Models have grown larger and more complex over the years, as they are applied to increasingly difficult problems on ever-growing datasets. In addition, DNNs are designed to operate in an overparameterized regime Arora et al. (2018); Belkin et al. (2019); Ardalani et al. (2019) to facilitate easier optimization using gradient descent methods. Consequently, the computational costs (memory and floating-point operations, FLOPS) of performing training and inference on state-of-the-art (SotA) models have been growing at an exponential rate (Amodei & Hernandez). Two major techniques for making DNNs more efficient are (1) reduced precision Micikevicius et al. (2017); Wang et al. (2018); Sun et al. (2019); Das et al. (2018); Kalamkar et al. (2019); Mellempudi et al. (2020), and (2) sparse representation. SotA training hardware today offers significantly more reduced-precision FLOPS than traditional FP32 computation, while support for structured sparsity is also emerging Mishra et al. (2021).
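
A minimal NumPy sketch of the gradient-annealing idea described above: gradients flowing to pruned (masked) weights are scaled by a factor alpha that decays non-linearly over training, rather than being zeroed outright. The function names, the magnitude-threshold mask and the quadratic decay schedule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def magnitude_mask(w, threshold):
    """Binary mask that keeps weights whose magnitude exceeds the threshold."""
    return (np.abs(w) > threshold).astype(w.dtype)

def annealed_gradient(grad_w, mask, alpha):
    """Scale gradients of masked (pruned) weights by alpha instead of zeroing them.

    alpha = 1.0 recovers dense training; alpha = 0.0 freezes pruned weights.
    Decaying alpha toward 0 is the 'annealing' in Gradient Annealing.
    """
    return grad_w * (mask + alpha * (1.0 - mask))

def alpha_schedule(step, total_steps, alpha0=1.0):
    """Illustrative non-linear decay schedule (an assumption, not the paper's)."""
    return alpha0 * (1.0 - step / total_steps) ** 2

# Toy usage: one layer, one SGD step
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
grad = rng.normal(size=(8, 8))
mask = magnitude_mask(w, threshold=0.5)
alpha = alpha_schedule(step=100, total_steps=1000)
w -= 0.01 * annealed_gradient(grad, mask, alpha)
```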


Tensor Processing Primitives: A Programming Abstraction for Efficiency and Portability in Deep Learning Workloads

arXiv.org Artificial Intelligence

During the past decade, novel Deep Learning (DL) algorithms/workloads and hardware have been developed to tackle a wide range of problems. Despite the advances in workload/hardware ecosystems, the programming methodology of DL-systems is stagnant. DL-workloads leverage either highly-optimized, yet platform-specific and inflexible kernels from DL-libraries, or, in the case of novel operators, reference implementations built via DL-framework primitives with underwhelming performance. This work introduces the Tensor Processing Primitives (TPP), a programming abstraction striving for efficient, portable implementation of DL-workloads with high productivity. TPPs define a compact, yet versatile set of 2D-tensor operators (or a virtual Tensor ISA), which subsequently can be utilized as building blocks to construct complex operators on high-dimensional tensors. The TPP specification is platform-agnostic, thus code expressed via TPPs is portable, whereas the TPP implementation is highly optimized and platform-specific. We demonstrate the efficacy of our approach using standalone kernels and end-to-end DL-workloads expressed entirely via TPPs that outperform state-of-the-art implementations on multiple platforms.
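
The compositional idea, small 2D-tensor primitives assembled into higher-level operators, can be mimicked in a few lines. The sketch below is a NumPy stand-in for illustration only: the actual TPPs are JIT-generated, platform-specific kernels, and the function names here are invented.

```python
import numpy as np

# NumPy stand-ins for 2D-tensor primitives in the spirit of TPPs
# (illustrative only; the real TPPs form a virtual tensor ISA with
#  platform-specific, highly optimized implementations).

def tpp_gemm(a, b, c=None):
    """2D GEMM primitive: returns A @ B, optionally accumulated into C."""
    out = a @ b
    return out if c is None else c + out

def tpp_unary_relu(x):
    """Unary elementwise primitive."""
    return np.maximum(x, 0.0)

def tpp_binary_add(x, y):
    """Binary elementwise primitive (e.g., a broadcast bias add)."""
    return x + y

def fully_connected_relu(x_blocked, w_blocked, bias):
    """A higher-level operator built purely from the primitives above.

    x_blocked: [num_blocks, M, K] activation blocks
    w_blocked: [num_blocks, K, N] weight blocks
    Accumulates the block GEMMs, then applies bias add and ReLU.
    """
    acc = np.zeros((x_blocked.shape[1], w_blocked.shape[2]))
    for xb, wb in zip(x_blocked, w_blocked):
        acc = tpp_gemm(xb, wb, acc)
    return tpp_unary_relu(tpp_binary_add(acc, bias))

# Toy usage
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 32, 64))    # 4 blocks along the K dimension
w = rng.normal(size=(4, 64, 128))
y = fully_connected_relu(x, w, bias=np.zeros(128))
```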


K-TanH: Hardware Efficient Activations For Deep Learning

arXiv.org Machine Learning

We propose K-TanH, a novel, highly accurate, hardware-efficient approximation of the popular Tanh activation function for Deep Learning. K-TanH consists of a sequence of parameterized bit/integer operations, such as masking, shift, and add/subtract (no floating-point operations needed), where the parameters are stored in a very small look-up table. The design of K-TanH is flexible enough to deal with multiple numerical formats, such as FP32 and BFloat16. High-quality approximations to other activation functions, e.g., Swish and GELU, can be derived from K-TanH. We provide an RTL design for K-TanH to demonstrate its area/power/performance efficacy. It is more accurate than existing piecewise approximations for Tanh; for example, K-TanH achieves $\sim 5\times$ speed-up and $> 6\times$ reduction in maximum approximation error over a software implementation of Hard TanH. Experimental results for low-precision BFloat16 training of the language translation model GNMT on the WMT16 data sets, with approximate Tanh and Sigmoid obtained via K-TanH, achieve similar accuracy and convergence as training with exact Tanh and Sigmoid.
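
A rough illustration of the look-up-table style of approximation: the input's magnitude selects per-interval parameters from a tiny table, and the output is formed with cheap arithmetic. The real K-TanH operates directly on BFloat16 bit fields using shift/add operations only; the slope/intercept table below is a simplified stand-in, not the paper's parameterization.

```python
import numpy as np

# Build a small table of (slope, intercept) per magnitude interval.
EDGES = np.array([0.0, 0.25, 0.5, 1.0, 1.5, 2.0, 3.0, np.inf])
SLOPES, INTERCEPTS = [], []
for lo, hi in zip(EDGES[:-1], EDGES[1:]):
    hi = min(hi, 4.0)                      # saturate the last interval
    s = (np.tanh(hi) - np.tanh(lo)) / (hi - lo)
    SLOPES.append(s)
    INTERCEPTS.append(np.tanh(lo) - s * lo)
SLOPES, INTERCEPTS = np.array(SLOPES), np.array(INTERCEPTS)

def k_tanh_like(x):
    """Piecewise-linear tanh approximation driven by a small look-up table."""
    sign = np.sign(x)
    a = np.abs(x)
    idx = np.searchsorted(EDGES, a, side="right") - 1
    return sign * np.clip(SLOPES[idx] * a + INTERCEPTS[idx], 0.0, 1.0)

x = np.linspace(-4.0, 4.0, 9)
print(np.max(np.abs(k_tanh_like(x) - np.tanh(x))))  # small maximum error
```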


A Study of BFLOAT16 for Deep Learning Training

arXiv.org Machine Learning

This paper presents the first comprehensive empirical study demonstrating the efficacy of the Brain Floating Point (BFLOAT16) half-precision format for Deep Learning training across image classification, speech recognition, language modeling, generative networks and industrial recommendation systems. BFLOAT16 is attractive for Deep Learning training for two reasons: the range of values it can represent is the same as that of the IEEE 754 single-precision floating-point format (FP32), and conversion to/from FP32 is simple. Maintaining the same range as FP32 is important to ensure that no hyper-parameter tuning is required for convergence; e.g., IEEE 754 compliant half-precision floating point (FP16) requires hyper-parameter tuning. In this paper, we discuss the flow of tensors and various key operations in mixed-precision training, and delve into the details of operations such as the rounding modes for converting FP32 tensors to BFLOAT16. We have implemented a method to emulate BFLOAT16 operations in Tensorflow, Caffe2, IntelCaffe, and Neon for our experiments. Our results show that deep learning training using BFLOAT16 tensors achieves the same state-of-the-art (SOTA) results across domains as FP32 tensors, in the same number of iterations and with no changes to hyper-parameters.
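
One of the key operations the paper examines, converting FP32 tensors to BFLOAT16, can be emulated by rounding away the low 16 bits of the FP32 bit pattern, since BFLOAT16 keeps FP32's 8-bit exponent and truncates the mantissa to 7 bits. The minimal NumPy sketch below uses round-to-nearest-even; it is a generic emulation (NaN/Inf are not treated specially), not the paper's framework integration.

```python
import numpy as np

def fp32_to_bf16_rne(x):
    """Emulate FP32 -> BFloat16 rounding (round-to-nearest-even), returned as FP32."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    # Standard RNE trick: add 0x7FFF plus the LSB of the retained half,
    # then clear the low 16 bits of the FP32 bit pattern.
    bias = np.uint32(0x7FFF) + ((bits >> np.uint32(16)) & np.uint32(1))
    return ((bits + bias) & np.uint32(0xFFFF0000)).view(np.float32)

x = np.array([1.0 + 1e-3, -0.1, 3.14159], dtype=np.float32)
print(x, fp32_to_bf16_rne(x))  # values snapped to the nearest BFloat16 grid points
```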


A Randomized Rounding Algorithm for Sparse PCA

arXiv.org Machine Learning

We present and analyze a simple, two-step algorithm to approximate the optimal solution of the sparse PCA problem. Our approach first solves an $\ell_1$-penalized version of the NP-hard sparse PCA optimization problem and then uses a randomized rounding strategy to sparsify the resulting dense solution. Our main theoretical result guarantees an additive-error approximation and provides a tradeoff between sparsity and accuracy. Our experimental evaluation indicates that our approach is competitive in practice, even compared to state-of-the-art toolboxes such as Spasm.
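
A small sketch of the two-step structure: a dense solution (stood in for here by the top right singular vector rather than the paper's $\ell_1$-penalized solve) is sparsified by randomized rounding. The sampling probabilities and rescaling below are illustrative choices; the paper fixes the exact scheme and proves the additive-error guarantee.

```python
import numpy as np

def randomized_rounding_sparsify(x_dense, k, rng):
    """Keep coordinate i with probability p_i = min(1, k*|x_i| / ||x||_1),
    rescale the kept entries by 1/p_i (unbiased), then renormalize."""
    p = np.minimum(1.0, k * np.abs(x_dense) / np.abs(x_dense).sum())
    keep = rng.random(x_dense.shape) < p
    x = np.where(keep, x_dense / np.maximum(p, 1e-12), 0.0)
    nrm = np.linalg.norm(x)
    return x / nrm if nrm > 0 else x

# Toy usage on a random data matrix A
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
_, _, Vt = np.linalg.svd(A, full_matrices=False)
x_dense = Vt[0]                       # dense surrogate for step one
x_sparse = randomized_rounding_sparsify(x_dense, k=5, rng=rng)
print(np.count_nonzero(x_sparse), x_sparse @ (A.T @ A) @ x_sparse)
```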


Relaxed Leverage Sampling for Low-rank Matrix Completion

arXiv.org Machine Learning

We consider the problem of exact recovery of any $m\times n$ matrix of rank $\varrho$ from a small number of observed entries via the standard nuclear norm minimization framework. Such low-rank matrices have $(m+n)\varrho - \varrho^2$ degrees of freedom. We show that an arbitrary low-rank matrix can be recovered exactly from $\Theta\left(((m+n)\varrho - \varrho^2)\log^2(m+n)\right)$ randomly sampled entries, thus matching the lower bound on the required number of entries (in terms of degrees of freedom) up to an additional factor of $O(\log^2(m+n))$. To achieve this bound on sample size, we observe each entry with probability proportional to the sum of the corresponding row and column leverage scores, minus their product. We show that this relaxation of the sampling probabilities (as opposed to the sum of leverage scores in Chen et al., 2014) can give an $O(\varrho^2\log^2(m+n))$ additive improvement on the (best known) sample size obtained by Chen et al., 2014, for nuclear norm minimization. Experiments on real data corroborate the theoretical improvement in sample size. Further, exact recovery of $(a)$ incoherent matrices (with restricted leverage scores), and $(b)$ matrices with only one of the row or column spaces incoherent, can be performed using our relaxed leverage score sampling, via nuclear norm minimization, without knowing the leverage scores a priori; in these settings, too, we achieve an improvement in sample size.
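
A sketch of how the relaxed observation probabilities might be formed when the leverage scores are available: per-entry probabilities proportional to the sum of the (normalized) row and column leverage scores minus their product, scaled to the sample-size budget from the theorem. The constant and the normalization are assumptions made here for illustration, and the recovery step itself (nuclear norm minimization over the observed entries) is not shown.

```python
import numpy as np

def relaxed_leverage_probs(L, rank, c=2.0):
    """Per-entry observation probabilities p_ij ~ mu_i + nu_j - mu_i * nu_j."""
    U, _, Vt = np.linalg.svd(L, full_matrices=False)
    mu = np.sum(U[:, :rank] ** 2, axis=1)    # row leverage scores (sum = rank)
    nu = np.sum(Vt[:rank].T ** 2, axis=1)    # column leverage scores (sum = rank)
    mu, nu = mu / mu.sum(), nu / nu.sum()    # normalize to probability weights
    p = mu[:, None] + nu[None, :] - np.outer(mu, nu)   # sum minus product
    m, n = L.shape
    budget = c * ((m + n) * rank - rank**2) * np.log(m + n) ** 2
    return np.minimum(1.0, p * budget / p.sum())

# Toy usage: reveal entries of a rank-3 matrix
rng = np.random.default_rng(0)
L = rng.normal(size=(60, 3)) @ rng.normal(size=(3, 40))
P = relaxed_leverage_probs(L, rank=3)
observed = rng.random(L.shape) < P
print(observed.mean())   # fraction of entries revealed
```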


Approximating Sparse PCA from Incomplete Data

arXiv.org Machine Learning

We study how well one can recover sparse principal components of a data matrix using a sketch formed from a few of its elements. We show that for a wide class of optimization problems, if the sketch is close (in the spectral norm) to the original data matrix, then one can recover a near-optimal solution to the optimization problem by using the sketch. In particular, we use this approach to obtain sparse principal components and show that for $m$ data points in $n$ dimensions, $O(\epsilon^{-2}\tilde k\max\{m,n\})$ elements give an $\epsilon$-additive approximation to the sparse PCA problem ($\tilde k$ is the stable rank of the data matrix). We demonstrate our algorithms extensively on image, text, biological and financial data. The results show that not only are we able to recover the sparse PCAs from the incomplete data, but, by using our sparse sketch, the running time drops by a factor of five or more.
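
The central claim, that spectral closeness of the sketch transfers to the sparse PCA objective, follows a standard perturbation argument; a condensed version (stated here on the Gram matrices, with the paper's exact closeness condition and constants elided) is:

\[
\bigl| x^\top A^\top A\, x - x^\top \tilde{A}^\top \tilde{A}\, x \bigr| \;\le\; \|A^\top A - \tilde{A}^\top \tilde{A}\|_2 \;\le\; \epsilon
\quad \text{for every unit vector } x,
\]
so if $\tilde{x}$ maximizes the sketch objective over $\{\|x\|_2 = 1,\ \|x\|_0 \le k\}$ and $x^\ast$ maximizes the true objective over the same set, then
\[
\tilde{x}^\top A^\top A\, \tilde{x} \;\ge\; \tilde{x}^\top \tilde{A}^\top \tilde{A}\, \tilde{x} - \epsilon \;\ge\; x^{\ast\top} \tilde{A}^\top \tilde{A}\, x^\ast - \epsilon \;\ge\; x^{\ast\top} A^\top A\, x^\ast - 2\epsilon .
\]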


Recovering PCA from Hybrid-$(\ell_1,\ell_2)$ Sparse Sampling of Data Elements

arXiv.org Machine Learning

This paper addresses how well we can recover a data matrix when only given a few of its elements. We present a randomized algorithm that element-wise sparsifies the data, retaining only a few of its elements. Our new algorithm independently samples the data using sampling probabilities that depend on both the squares ($\ell_2$ sampling) and the absolute values ($\ell_1$ sampling) of the entries. We prove that the hybrid algorithm recovers a near-PCA reconstruction of the data from a sublinear sample size: hybrid-($\ell_1,\ell_2$) sampling inherits the $\ell_2$ ability to sample the important elements as well as the regularization properties of $\ell_1$ sampling, and gives strictly better performance than either $\ell_1$ or $\ell_2$ on their own. We also give a one-pass version of our algorithm and show experiments to corroborate the theory.
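
A sketch of the hybrid element-wise sampling with an unbiased rescaling of the kept entries; the mixing weight between the $\ell_1$ and $\ell_2$ probabilities and the sample size are illustrative assumptions here, whereas the paper derives both from its error analysis.

```python
import numpy as np

def hybrid_sample(A, s, alpha=0.5, rng=None):
    """Element-wise sparsification with hybrid (l1, l2) sampling.

    p_ij mixes |A_ij| / sum|A| (l1) with A_ij^2 / ||A||_F^2 (l2); s entries are
    drawn with replacement and rescaled by 1/(s * p_ij) so the sketch is an
    unbiased estimate of A.
    """
    rng = np.random.default_rng() if rng is None else rng
    p1 = np.abs(A) / np.abs(A).sum()
    p2 = A**2 / (A**2).sum()
    p = alpha * p1 + (1.0 - alpha) * p2
    idx = rng.choice(A.size, size=s, replace=True, p=p.ravel())
    counts = np.bincount(idx, minlength=A.size).reshape(A.shape)
    p_safe = np.where(p > 0, p, 1.0)            # avoid 0/0 where A_ij == 0
    return counts * A / (s * p_safe)

# Toy usage: sparse sketch of a 100 x 80 matrix from 2000 sampled entries
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 80)) @ np.diag(rng.random(80))
S = hybrid_sample(A, s=2000, rng=rng)
print(np.linalg.norm(A - S, 2) / np.linalg.norm(A, 2))   # relative spectral error
```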