GOVERN: Gradient Orientation Vote Ensemble for Multi-Teacher Reinforced Distillation

Wenjie Zhou, Zhenxin Ding, Xiaodong Zhang, Haibo Shi, Junfeng Wang, Dawei Yin

arXiv.org Artificial Intelligence 

Pre-trained language models have become an integral component of question-answering systems, achieving remarkable performance. For practical deployment, it is critical to carry out knowledge distillation to preserve high performance under computational constraints. In this paper, we address a key question: given the importance of unsupervised distillation for student performance, how does one effectively ensemble knowledge from multiple teachers at this stage without the guidance of ground-truth labels? We propose a novel algorithm, GOVERN, to tackle this issue. GOVERN has demonstrated significant improvements in both offline and online experiments. The proposed algorithm has been successfully deployed in a real-world commercial question-answering system.
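The abstract does not detail GOVERN's mechanics, but the core idea it names, ensembling multiple teachers by voting without ground-truth labels, can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy, not the paper's algorithm: each teacher casts a hard vote with its predicted class per sample, and only teachers in the plurality are averaged to form the distillation target (the function name `vote_ensemble_targets` and the averaging rule are illustrative choices, not taken from the paper).

```python
import numpy as np

def vote_ensemble_targets(teacher_logits):
    """Toy vote-based multi-teacher aggregation (illustrative, not GOVERN).

    teacher_logits: array of shape (T, N, C) -- T teachers, N samples, C classes.
    For each sample, teachers "vote" with their argmax class; the logits of
    teachers agreeing with the plurality class are averaged into the target.
    """
    T, N, C = teacher_logits.shape
    preds = teacher_logits.argmax(axis=2)        # (T, N) hard votes per teacher
    targets = np.empty((N, C))
    for n in range(N):
        votes = preds[:, n]
        counts = np.bincount(votes, minlength=C)
        winner = counts.argmax()                 # plurality class for this sample
        agree = votes == winner                  # mask of majority teachers
        targets[n] = teacher_logits[agree, n].mean(axis=0)
    return targets
```

A student would then be trained against `targets` with a standard distillation loss; the point of the sketch is only that label-free ensembling can filter out dissenting teachers on a per-sample basis before averaging.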
