SDQ: Sparse Decomposed Quantization for LLM Inference
Geonhwa Jeong, Po-An Tsai, Stephen W. Keckler, Tushar Krishna
arXiv.org Artificial Intelligence
Recently, large language models (LLMs) have shown surprising performance in task-specific workloads as well as general tasks with the given prompts. However, to achieve unprecedented performance, recent LLMs use billions to trillions of parameters, which hinders the wide adoption of those models due to their extremely large compute and memory requirements. To resolve the issue, various model compression methods are being actively investigated. In this work, we propose SDQ (Sparse Decomposed Quantization) to exploit both structured sparsity and quantization to achieve both high compute and memory efficiency. From our evaluations, we observe that SDQ can achieve 4x effective compute throughput with <1% quality drop.

Large Language Models (LLMs) (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023b) with billions or trillions of parameters have gained extensive attention as they show promising quality in various domains. Previous efforts (Hoefler et al., 2021) have shown how to compress classic DNNs by more than 90% (10x computation reduction) with limited loss of model quality; however, when it comes to LLMs, compressing beyond 50% (2x computation reduction) with a limited quality drop remains challenging.
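The abstract only states that SDQ combines structured sparsity with quantization; the following is a minimal sketch of one way such a sparse-plus-quantized weight decomposition could look, assuming INT4 group quantization for the dense part and a 2:4 structured-sparse floating-point correction that keeps the largest quantization residuals. The function names (quantize_int4, sparse_2to4_residual, decompose), the group size, and the 2:4 pattern are illustrative assumptions, not the paper's actual algorithm.

import numpy as np

def quantize_int4(w, group_size=64):
    # Symmetric per-group 4-bit round-to-nearest quantization (an assumed, generic scheme).
    groups = w.reshape(-1, group_size)
    scale = np.maximum(np.abs(groups).max(axis=1, keepdims=True) / 7.0, 1e-8)
    q = np.clip(np.round(groups / scale), -8, 7)
    return (q * scale).reshape(w.shape)  # return dequantized values for this sketch

def sparse_2to4_residual(residual):
    # Keep the 2 largest-magnitude entries in every group of 4 (2:4 structured sparsity).
    groups = residual.reshape(-1, 4)
    keep = np.argsort(np.abs(groups), axis=1)[:, 2:]  # indices of the top-2 per group
    mask = np.zeros(groups.shape, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=1)
    return np.where(mask, groups, 0.0).reshape(residual.shape)

def decompose(w):
    # Approximate w as a quantized dense component plus a structured-sparse correction.
    w_q = quantize_int4(w)
    w_s = sparse_2to4_residual(w - w_q)
    return w_q, w_s

rng = np.random.default_rng(0)
w = rng.normal(size=(128, 64)).astype(np.float32)
w_q, w_s = decompose(w)
print("mean abs error, quantized only:     ", float(np.abs(w - w_q).mean()))
print("mean abs error, quantized + sparse: ", float(np.abs(w - (w_q + w_s)).mean()))

Because the sparse term stores the exact residual at the positions it keeps, adding it can only reduce the reconstruction error of quantization alone, which is the intuition behind pairing the two techniques.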
Jun-19-2024