Exploring Model Invariance with Discrete Search for Ultra-Low-Bit Quantization
Yuqiao Wen, Yanshuai Cao, Lili Mou
arXiv.org Artificial Intelligence
Large language models have been growing in size due to their success in a wide range of applications, creating a pressing need to reduce their memory usage and make them more accessible. Post-training quantization is a popular technique that represents a model with fewer bits (e.g., 4–8 bits) without retraining it. However, quantization remains challenging in ultra-low-bit setups (e.g., 2 bits). In this paper, we propose InvarExplore, a unified framework that systematically explores different types of model invariance at the same time, allowing us to take advantage of the synergy among them. Importantly, InvarExplore features a discrete search algorithm that enables the exploration of permutation invariance, which is under-studied because it cannot be optimized with gradient-based methods. Results show that InvarExplore is compatible with existing state-of-the-art methods, achieving an additional performance improvement over strong competing methods.
Feb-6-2025
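To make the permutation-invariance idea concrete, here is a minimal NumPy sketch. It is not the paper's InvarExplore algorithm: it assumes a toy two-layer ReLU network, a hypothetical group-wise symmetric quantizer (quantize_grouped), and a naive random-permutation search in place of the paper's discrete search. It only illustrates the property such a search exploits: permuting hidden units preserves the model's function while changing group-wise quantization error.

    import numpy as np

    rng = np.random.default_rng(0)
    d_in, d_hid, d_out = 8, 16, 4
    W1 = rng.standard_normal((d_hid, d_in))
    b1 = rng.standard_normal(d_hid)
    W2 = rng.standard_normal((d_out, d_hid))
    x = rng.standard_normal(d_in)

    def forward(W1, b1, W2, x):
        # Two-layer ReLU MLP; a toy stand-in for a real model.
        return W2 @ np.maximum(W1 @ x + b1, 0.0)

    def quantize_grouped(W, n_bits=2, group=4, axis=0):
        # Symmetric uniform quantization with one scale per `group`
        # consecutive slices along `axis`. Group-wise scales are what
        # make permutations matter: reordering rows/columns changes
        # which weights share a scale, and hence the error.
        Wm = np.moveaxis(W, axis, 0)
        Q = np.empty_like(Wm)
        levels = 2 ** (n_bits - 1) - 1  # e.g., 1 for 2-bit symmetric
        for i in range(0, Wm.shape[0], group):
            block = Wm[i:i + group]
            scale = max(np.abs(block).max(), 1e-12) / levels
            Q[i:i + group] = np.round(block / scale) * scale
        return np.moveaxis(Q, 0, axis)

    def quant_error(W1, W2, group=4):
        # Layer-wise squared reconstruction error after quantization.
        e1 = W1 - quantize_grouped(W1, axis=0, group=group)
        e2 = W2 - quantize_grouped(W2, axis=1, group=group)
        return float(np.sum(e1 ** 2) + np.sum(e2 ** 2))

    # Functional invariance: permuting hidden units (rows of W1,
    # entries of b1, matching columns of W2) leaves the output intact.
    perm = rng.permutation(d_hid)
    assert np.allclose(forward(W1, b1, W2, x),
                       forward(W1[perm], b1[perm], W2[:, perm], x))

    # Simplified discrete search: sample random permutations and keep
    # the one that quantizes best (a toy stand-in for the paper's
    # actual search procedure).
    best_perm, best_err = np.arange(d_hid), quant_error(W1, W2)
    for _ in range(500):
        p = rng.permutation(d_hid)
        err = quant_error(W1[p], W2[:, p])
        if err < best_err:
            best_perm, best_err = p, err
    print(f"baseline error {quant_error(W1, W2):.4f} -> searched {best_err:.4f}")

The sketch also shows why the abstract emphasizes a discrete search: swapping two hidden units is a discrete move with no useful gradient, so permutation invariance cannot be exploited by gradient-based optimization.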