Targeted Error Correction in Knowledge Distillation: Small Language Models Surpass GPT
Hee-Jin Lee, Zhen Guo, Luchao Jin, Morteza Moazami Goudarzi
arXiv.org Artificial Intelligence
We introduce an Analyze-Revise-Finetune (ARF) pipeline that enables smaller open-source large language models (LLMs) to surpass substantially larger proprietary models on customer service summarization tasks. The pipeline first analyzes and categorizes common errors in summaries produced by a teacher model (GPT-3.5), then performs targeted revision with a compact editor model (Llama 3.1 70B) to generate high-quality, refined training data. Fine-tuning a smaller student model (Llama 3.1 8B) on this refined data yielded superior summarization performance compared to GPT-3.5. The ARF pipeline improves cost efficiency and data privacy while maintaining competitive accuracy, illustrating a generalizable framework for enhancing open-source LLMs across diverse downstream applications.
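The three ARF stages can be sketched in code. This is a minimal illustration only: the model calls are stubbed out, and the function names and error taxonomy below are assumptions for exposition, not the authors' actual implementation or prompts.

```python
# Hypothetical sketch of the Analyze-Revise-Finetune (ARF) pipeline.
# Model calls are stubbed; the error taxonomy is an illustrative assumption.

ERROR_CATEGORIES = ["hallucination", "omission", "wrong_resolution"]  # assumed taxonomy

def analyze(summary: str) -> list[str]:
    """Stage 1: categorize errors in a teacher-model (GPT-3.5) summary.
    Stub: flags a category when its marker token appears in the text."""
    return [c for c in ERROR_CATEGORIES if f"<{c}>" in summary]

def revise(summary: str, errors: list[str]) -> str:
    """Stage 2: targeted revision by an editor model (e.g. Llama 3.1 70B).
    Stub: removes the marker token for each flagged error category."""
    for c in errors:
        summary = summary.replace(f"<{c}>", "")
    return summary.strip()

def build_finetune_set(teacher_summaries: list[str]) -> list[dict]:
    """Stage 3 input: refined summaries used to fine-tune the student
    model (e.g. Llama 3.1 8B)."""
    refined = []
    for s in teacher_summaries:
        errors = analyze(s)
        refined.append({"summary": revise(s, errors), "errors_fixed": errors})
    return refined

data = build_finetune_set(["Customer upgraded plan. <hallucination>", "Issue resolved."])
print(data[0]["errors_fixed"])  # ['hallucination']
```

In a real system, `analyze` and `revise` would be LLM calls guided by the error categories discovered in stage 1, and the refined pairs would feed a standard supervised fine-tuning loop.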
Nov-6-2025