Are Large Language Models State-of-the-art Quality Estimators for Machine Translation of User-generated Content?
Qian, Shenbin, Orăsan, Constantin, Kanojia, Diptesh, Carmo, Félix do
–arXiv.org Artificial Intelligence
This paper investigates whether large language models (LLMs) are state-of-the-art quality estimators for machine translation of user-generated content (UGC) that contains emotional expressions, without the use of reference translations. To achieve this, we employ an existing emotion-related dataset with human-annotated errors and calculate quality evaluation scores based on the Multi-dimensional Quality Metrics (MQM). We compare the accuracy of several LLMs with that of our fine-tuned baseline models, under in-context learning and parameter-efficient fine-tuning (PEFT) scenarios. We find that PEFT of LLMs leads to better performance in score prediction, with human-interpretable explanations, than fine-tuned models. However, a manual analysis of LLM outputs reveals that they still have problems, such as refusing to reply to a prompt and producing unstable output, when evaluating machine translation of UGC.
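The MQM-based scoring the abstract refers to can be sketched as a weighted error penalty. This is a minimal illustration assuming common MQM severity weights (minor = 1, major = 5, critical = 10) and a per-100-words normalization; the paper's exact weighting scheme may differ.

```python
# Hypothetical MQM-style segment scoring: weights and normalization are
# assumptions, not necessarily those used in the paper.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def mqm_score(error_severities, num_words):
    """Return 100 minus the weighted error penalty per 100 words."""
    penalty = sum(SEVERITY_WEIGHTS[sev] for sev in error_severities)
    return 100.0 - 100.0 * penalty / num_words

# Example: two minor errors and one major error in a 25-word segment.
print(mqm_score(["minor", "minor", "major"], 25))  # 72.0
```

Under this scheme an error-free segment scores 100, and each annotated error lowers the score in proportion to its severity, which is what makes the human error annotations usable as regression targets for the quality estimators.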
Oct-8-2024