SparseLLM: Towards Global Pruning of Pre-trained Language Models

Neural Information Processing Systems 

Pruning has emerged as a pivotal compression strategy for pre-trained language models, introducing sparsity into model weights to reduce both memory footprint and computational cost.
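As a minimal illustration of how pruning introduces sparsity (this is generic magnitude pruning, not the SparseLLM global method described in the paper), the sketch below zeroes out the smallest-magnitude entries of a weight matrix until a target sparsity level is reached:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so that roughly
    `sparsity` fraction of the entries become zero."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value serves as the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W_sparse = magnitude_prune(W, sparsity=0.5)
print((W_sparse == 0.0).mean())  # fraction of zeroed weights
```

Sparse weights like `W_sparse` can then be stored in compressed formats and skipped during matrix multiplication, which is the source of the memory and compute savings the abstract refers to.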
