GOVERN: Gradient Orientation Vote Ensemble for Multi-Teacher Reinforced Distillation
Wenjie Zhou, Zhenxin Ding, Xiaodong Zhang, Haibo Shi, Junfeng Wang, Dawei Yin
arXiv.org Artificial Intelligence
Pre-trained language models have become an integral component of question-answering systems, achieving remarkable performance. For practical deployment, it is critical to carry out knowledge distillation to preserve high performance under computational constraints. In this paper, we address a key question: given the importance of unsupervised distillation for student performance, how does one effectively ensemble knowledge from multiple teachers at this stage without the guidance of ground-truth labels? We propose a novel algorithm, GOVERN, to tackle this issue. GOVERN has demonstrated significant improvements in both offline and online experiments. The proposed algorithm has been successfully deployed in a real-world commercial question-answering system.
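The abstract does not spell out how GOVERN aggregates the teachers, so the following is only an illustrative sketch of the general idea its title suggests: letting multiple teachers vote on the direction (orientation) in which each would push the student, and building the distillation target from the majority. The function name `vote_ensemble_soft_labels` and all details below are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def vote_ensemble_soft_labels(teacher_probs, student_probs):
    """Hypothetical vote-based ensemble of teacher soft labels.

    For each class, every teacher "votes" on whether it would push the
    student's probability up or down. Only teachers that agree with the
    majority direction are averaged into the distillation target, so a
    single outlier teacher cannot drag the target off course. This is a
    sketch of the voting idea, not the published GOVERN algorithm.

    teacher_probs: (T, C) array, one row of class probabilities per teacher
    student_probs: (C,) array, the student's current class probabilities
    returns: (C,) ensembled soft label (renormalised to sum to 1)
    """
    directions = np.sign(teacher_probs - student_probs)  # (T, C) votes
    majority = np.sign(directions.sum(axis=0))           # (C,) majority vote
    weights = (directions == majority).astype(float)     # keep agreeing teachers
    counts = weights.sum(axis=0)                         # agreeing teachers per class
    ensembled = np.where(
        counts > 0,
        (weights * teacher_probs).sum(axis=0) / np.maximum(counts, 1e-9),
        teacher_probs.mean(axis=0),  # fall back to a plain mean if no majority
    )
    return ensembled / ensembled.sum()
```

For example, with three teachers where two push class 0 up and one pushes it down, the dissenting teacher is excluded and the target is the mean of the two agreeing teachers. In an unsupervised distillation stage, the student would then be trained against this target (e.g. with a KL or cross-entropy loss) in place of unavailable ground-truth labels.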
May 6, 2024