Learning Translation Quality Evaluation on Low Resource Languages from Large Language Models
Mohtashami, Amirkeivan, Verzetti, Mauro, Rubenstein, Paul K.
–arXiv.org Artificial Intelligence
Learned metrics such as BLEURT have in recent years become widely employed to evaluate the quality of machine translation systems. Training such metrics requires data, which can be expensive and difficult to acquire, particularly for lower-resource languages. We show how knowledge can be distilled from Large Language Models (LLMs) to improve upon such learned metrics without requiring human annotators, by creating synthetic datasets which can be mixed into existing datasets, requiring only a corpus of text in the target language. We show that the performance of a BLEURT-like model on lower-resource languages can be improved in this way.

A machine translation system is typically evaluated by comparing its output on a given input sentence with a reference produced by a professional translator. Until recently, commonly used metrics such as BLEU (Papineni et al., 2002b) and ROUGE (Lin, 2004) were generally based on the number of co-occurring n-grams. Advantages of such methods include that they are easy to interpret, do not require learning from data, and have been shown to generally correlate with human judgement when averaged over a corpus of sentences. Nonetheless, these approaches fail when sentences are semantically similar but differ significantly in phrasing.
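To make this failure mode concrete, the following is a minimal sketch, not taken from the paper, of the clipped n-gram precision that BLEU-style metrics build on; the example sentences and function name are illustrative. A candidate that paraphrases the reference faithfully still receives low unigram and bigram overlap.

```python
from collections import Counter

def ngram_precision(candidate, reference, n=2):
    """Fraction of candidate n-grams that also appear in the reference (clipped counts)."""
    cand_tokens = candidate.lower().split()
    ref_tokens = reference.lower().split()
    cand_ngrams = Counter(tuple(cand_tokens[i:i + n]) for i in range(len(cand_tokens) - n + 1))
    ref_ngrams = Counter(tuple(ref_tokens[i:i + n]) for i in range(len(ref_tokens) - n + 1))
    overlap = sum(min(count, ref_ngrams[g]) for g, count in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return overlap / total if total else 0.0

reference = "the children were playing outside in the garden"
paraphrase = "the kids played outdoors in the yard"  # same meaning, different wording

print(ngram_precision(paraphrase, reference, n=1))  # ~0.43: only "the" and "in" overlap
print(ngram_precision(paraphrase, reference, n=2))  # ~0.17: only "in the" overlaps
```

Despite the candidate being an acceptable paraphrase, its surface overlap with the reference is low, which is precisely the gap that learned metrics such as BLEURT aim to close.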
Feb-7-2023