KDLSQ-BERT: A Quantized Bert Combining Knowledge Distillation with Learned Step Size Quantization

Jing Jin, Cai Liang, Tiancheng Wu, Liqin Zou, Zhiliang Gan

arXiv.org Artificial Intelligence 

Recently, transformer-based language models such as BERT have shown tremendous performance improvements on a range of natural language processing tasks. However, these language models are usually computationally expensive and memory intensive during inference, which makes them difficult to deploy on resource-restricted devices. To improve inference performance and reduce model size while maintaining model accuracy, we propose a novel quantization method named KDLSQ-BERT that combines knowledge distillation (KD) with learned step size quantization (LSQ) for language model quantization. The main idea of our method is that KD is leveraged to transfer knowledge from a "teacher" model to a "student" model while LSQ is used to quantize that "student" model during quantization training. Extensive experimental results on the GLUE benchmark and SQuAD demonstrate that the proposed KDLSQ-BERT performs effectively when quantizing to different bit widths. Our code will be made publicly available.

Recently, transformer-based language models such as BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) have achieved remarkable performance on many natural language processing tasks. However, it is difficult to deploy these models directly on resource-restricted devices, since they usually contain a large number of weight parameters and are computationally expensive and memory intensive.
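To make the two ingredients named above concrete, the following is a minimal, illustrative PyTorch sketch (not the authors' released code): an LSQ-style fake quantizer with a learnable step size, and a soft-label knowledge-distillation loss that transfers a full-precision teacher's predictions to the quantized student during training. The class and function names, the gradient-scale heuristic, and the temperature value are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch only: LSQ-style quantization plus a KD loss, assuming PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LSQQuantizer(nn.Module):
    """Fake-quantizes a tensor using a learnable step size (LSQ)."""

    def __init__(self, bits: int = 8):
        super().__init__()
        self.qn = -(2 ** (bits - 1))          # lower quantization bound
        self.qp = 2 ** (bits - 1) - 1         # upper quantization bound
        self.step = nn.Parameter(torch.tensor(1.0))  # learned step size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Gradient scale for the step size, as suggested in LSQ (Esser et al., 2020).
        g = 1.0 / ((x.numel() * self.qp) ** 0.5)
        step = self.step * g + (self.step - self.step * g).detach()
        # Straight-through estimator: round() passes the rounded value forward
        # and the identity gradient backward.
        q = torch.clamp(x / step, self.qn, self.qp)
        q = (q.round() - q).detach() + q
        return q * step  # dequantized ("fake-quantized") output


def kd_loss(student_logits, teacher_logits, temperature: float = 2.0):
    """Soft-label distillation loss between student and teacher logits."""
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)


# Usage sketch: fake-quantize an activation, then distill from a frozen teacher.
if __name__ == "__main__":
    quantizer = LSQQuantizer(bits=4)
    hidden = torch.randn(8, 128, requires_grad=True)
    quantized_hidden = quantizer(hidden)       # differentiable w.r.t. the step size
    student_logits = torch.randn(8, 2, requires_grad=True)
    teacher_logits = torch.randn(8, 2)         # produced by the full-precision teacher
    loss = kd_loss(student_logits, teacher_logits)
    loss.backward()
```

In a full training setup, quantizers of this kind would be attached to the student's weights and activations, and the distillation term would be combined with the task objective; the exact loss composition used by KDLSQ-BERT follows the paper rather than this sketch.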
