Navigating Extremes: Dynamic Sparsity in Large Output Spaces
Mike Lasby
Neural Information Processing Systems
In recent years, Dynamic Sparse Training (DST) has emerged as an alternative to post-training pruning for generating efficient models. In principle, DST allows for a more memory-efficient training process, as it maintains sparsity throughout the entire training run. However, current DST implementations fail to capitalize on this in practice. Because sparse matrix multiplication is much less efficient than dense matrix multiplication on GPUs, most implementations simulate sparsity by masking weights. In this paper, we leverage recent advances in semi-structured sparse training to apply DST to classification with large output spaces, where memory efficiency is paramount.
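A minimal sketch (not from the paper) of the masking pattern the abstract refers to: a binary mask zeroes out most entries of a dense weight matrix, but the forward pass still performs a dense matmul, so neither memory nor compute is actually reduced. The `MaskedLinear` module and its parameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Simulated sparsity: dense weights multiplied by a fixed binary mask."""

    def __init__(self, in_features: int, out_features: int, sparsity: float = 0.9):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        # The mask is stored alongside the dense weights, so memory use
        # goes up rather than down -- the issue the abstract points out.
        mask = (torch.rand(out_features, in_features) > sparsity).float()
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Masked entries are zero but are still multiplied in a dense matmul,
        # so the GPU does the same work as for a fully dense layer.
        return x @ (self.weight * self.mask).t()

layer = MaskedLinear(1024, 10_000, sparsity=0.9)
out = layer(torch.randn(8, 1024))
print(out.shape)  # torch.Size([8, 10000])
```

Semi-structured (e.g. 2:4) sparse kernels avoid this by storing and multiplying only the retained weights in a hardware-supported layout, which is what makes memory savings realizable for large output layers.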