Every Expert Matters: Towards Effective Knowledge Distillation for Mixture-of-Experts Language Models
