Multi-teacher Distillation for Multilingual Spelling Correction
Jingfen Zhang, Xuan Guo, Sravan Bodapati, Christopher Potts
arXiv.org Artificial Intelligence
Accurate spelling correction is a critical step in modern search interfaces, especially in an era of mobile devices and speech-to-text input. For services that are deployed around the world, this poses a significant challenge for multilingual NLP: spelling errors need to be caught and corrected in all languages, and even in queries that mix multiple languages. In this paper, we tackle this challenge using multi-teacher distillation. In our approach, a monolingual teacher model is trained for each language/locale, and these individual models are distilled into a single multilingual student model intended to serve all languages/locales. In experiments using open-source data as well as user data from a worldwide search service, we show that this leads to highly effective spelling correction models that can meet the tight latency requirements of deployed services.
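The distillation setup the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes that each training example is routed to the teacher for its own locale, and that the student is trained on a standard temperature-scaled KL divergence against that teacher's softened output. The function name and routing scheme are hypothetical.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multi_teacher_kd_loss(student_logits, teacher_logits, T=2.0):
    """Multi-teacher distillation loss (hypothetical sketch).

    teacher_logits[i] is assumed to come from the monolingual teacher
    matching example i's locale, so the single student learns from a
    different teacher per example. Uses KL(teacher || student) on
    temperature-softened distributions, scaled by T^2 as in standard
    knowledge distillation.
    """
    losses = []
    for s, t in zip(student_logits, teacher_logits):
        p_t = softmax(t, T)                      # soft teacher targets
        log_p_s = np.log(softmax(s, T))          # student log-probs
        kl = float((p_t * (np.log(p_t) - log_p_s)).sum())
        losses.append(kl * T * T)
    return sum(losses) / len(losses)
```

In a full training loop this term would typically be combined with the ordinary cross-entropy loss on gold corrections; the loss is zero when the student exactly matches each teacher and positive otherwise.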
Nov-19-2023