Sparse GPU Kernels for Deep Learning

Trevor Gale, Matei Zaharia, Cliff Young, Erich Elsen

arXiv.org Machine Learning 

Abstract--Scientific workloads have traditionally exploited high levels of sparsity to accelerate computation and reduce memory requirements. While deep neural networks can be made sparse, achieving practical speedups on GPUs is difficult because these applications have relatively moderate levels of sparsity that are not sufficient for existing sparse kernels to outperform their dense counterparts. In this work, we study sparse matrices from deep learning applications and identify favorable properties that can be exploited to accelerate computation. Based on these insights, we develop high-performance GPU kernels for two sparse matrix operations widely applicable in neural networks: sparse matrix-dense matrix multiplication (SpMM) and sampled dense-dense matrix multiplication (SDDMM). Using our kernels, we demonstrate sparse Transformer and MobileNet models that achieve 1.2-2.1x speedups and up to 12.8x memory savings without sacrificing accuracy.

[Figure caption: This work enables speedups for all problems in the highlighted region.]

... procedure, a sparsification algorithm is applied to produce a neural network where a high fraction of the weights are zero-valued. The weight matrices can then be stored in a compressed format, and sparse linear algebra kernels can be ... In the context of generative models, sparsity has been applied to reduce the computational requirements of self-attention in Transformer architectures [6], [10], [11].

Existing GPU kernels for sparse linear algebra are primarily optimized for scientific applications, where matrices are extremely (99%) sparse. At the moderate levels of sparsity found in deep neural networks, these kernels are not able to outperform their dense counterparts. To address this issue, structure can be enforced on the topology of nonzeros such that nonzero values are grouped into blocks [12]-[14]. While this approach is able to recover ...
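To make the two operations concrete, the following is a minimal NumPy reference sketch (not the paper's GPU kernels) of SpMM and SDDMM over a compressed sparse row (CSR) layout; the helper names `dense_to_csr`, `spmm`, and `sddmm` are illustrative, not from the paper.

```python
import numpy as np

def dense_to_csr(a):
    """Illustrative CSR conversion: (values, column indices, row offsets)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in a:
        nz = np.nonzero(row)[0]
        values.extend(row[nz])
        col_idx.extend(nz)
        row_ptr.append(len(values))
    return np.array(values), np.array(col_idx), np.array(row_ptr)

def spmm(values, col_idx, row_ptr, b):
    """SpMM reference: out = A @ B, where A is sparse (CSR) and B is dense."""
    m, n = len(row_ptr) - 1, b.shape[1]
    out = np.zeros((m, n))
    for i in range(m):
        # Accumulate only the nonzero terms of row i of A.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            out[i] += values[k] * b[col_idx[k]]
    return out

def sddmm(mask_col_idx, mask_row_ptr, x, y):
    """SDDMM reference: compute (X @ Y)[i, j] only at the mask's nonzero
    positions, returning the sampled values in CSR order."""
    out = np.zeros(len(mask_col_idx))
    for i in range(len(mask_row_ptr) - 1):
        for k in range(mask_row_ptr[i], mask_row_ptr[i + 1]):
            out[k] = x[i] @ y[:, mask_col_idx[k]]
    return out
```

In a sparse Transformer, for example, SDDMM corresponds to computing attention scores only at positions allowed by the sparsity pattern, and SpMM to applying the resulting sparse attention matrix to the dense value matrix.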
