Effective and Efficient Mixed Precision Quantization of Speech Foundation Models
Haoning Xu, Zhaoqing Li, Zengrui Jin, Huimeng Wang, Youjun Chen, Guinan Li, Mengzhe Geng, Shujie Hu, Jiajun Deng, Xunying Liu
– arXiv.org Artificial Intelligence
This paper presents a novel mixed-precision quantization approach for speech foundation models that tightly integrates mixed-precision learning and quantized model parameter estimation into a single model compression stage. Experiments conducted on the LibriSpeech dataset with fine-tuned wav2vec2.0-base and HuBERT-large models suggest that the resulting mixed-precision quantized models increase the lossless compression ratio by factors of up to 1.7x and 1.9x over the respective uniform-precision and two-stage mixed-precision quantized baselines, which perform precision learning and model parameter quantization in separate, disjoint stages, while incurring no statistically significant word error rate (WER) increase over the 32-bit full-precision models. The system compression time of the wav2vec2.0-base and HuBERT-large models is reduced by up to 1.9 and 1.5 times over the two-stage mixed-precision baselines, while both produce lower WERs. The best-performing 3.5-bit mixed-precision quantized HuBERT-large model achieves a lossless compression ratio of 8.6x over the 32-bit full-precision system.
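To make the core idea of mixed-precision quantization concrete, the sketch below applies symmetric uniform quantization with a different bit-width per layer and computes the resulting average precision and idealised compression ratio versus 32-bit floats. This is a minimal illustration only, not the paper's method (which learns the per-layer precisions jointly with quantized parameter estimation in one stage); the layer names, bit-width assignments, and toy weights are hypothetical.

```python
import numpy as np

def uniform_quantize(weights: np.ndarray, n_bits: int) -> np.ndarray:
    """Symmetric uniform quantization of a weight tensor to n_bits (simulated)."""
    q_max = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(weights)) / q_max
    q = np.clip(np.round(weights / scale), -q_max - 1, q_max)
    return q * scale  # de-quantized weights for accuracy evaluation

# Hypothetical per-layer bit-width assignment (mixed precision).
layer_bits = {"encoder.layer0.attn": 4, "encoder.layer0.ffn": 3,
              "encoder.layer1.attn": 4, "encoder.layer1.ffn": 2}

# Toy weight matrices standing in for fine-tuned speech foundation model parameters.
rng = np.random.default_rng(0)
layers = {name: rng.standard_normal((256, 256)).astype(np.float32)
          for name in layer_bits}

quantized = {name: uniform_quantize(w, layer_bits[name])
             for name, w in layers.items()}

# Average bit-width and the idealised compression ratio relative to FP32.
n_params = sum(w.size for w in layers.values())
avg_bits = sum(layer_bits[n] * w.size for n, w in layers.items()) / n_params
print(f"average precision: {avg_bits:.2f}-bit, "
      f"compression ratio vs. FP32: {32 / avg_bits:.1f}x")
```

In this simplified view, a model quantized to an average of 3.5 bits would yield roughly 32/3.5 ≈ 9.1x raw compression; reported lossless ratios such as 8.6x are lower because quantization tables and other side information must also be stored.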
Jan-11-2025
- Genre:
- Research Report (1.00)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning
- Neural Networks (0.47)
- Statistical Learning (0.35)
- Natural Language (0.93)
- Speech > Speech Recognition (0.31)