Evaluating Automatic Metrics with Incremental Machine Translation Systems
Wu, Guojun, Cohen, Shay B., Sennrich, Rico
We introduce a dataset comprising commercial machine translations, gathered weekly over six years across 12 translation directions. Because commercial providers commonly rely on human A/B testing, we assume their systems improve over time, which enables us to evaluate machine translation (MT) metrics based on their preference for more recent translations. Our study confirms several previous findings in MT metrics research and demonstrates the dataset's value as a testbed for metric evaluation. We release our code at https://github.com/gjwubyron/Evo.
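The evaluation idea described in the abstract, checking whether a metric scores later system outputs higher than earlier ones, could be computed along the following lines. This is a minimal sketch, not the released code: the function name `recency_preference`, the `score` callable, and the data layout are illustrative assumptions.

```python
from typing import Callable, Sequence

def recency_preference(
    score: Callable[[str, str], float],  # metric: (source, translation) -> quality score
    sources: Sequence[str],              # source sentences
    older: Sequence[str],                # translations collected at an earlier date
    newer: Sequence[str],                # translations of the same sources at a later date
) -> float:
    """Fraction of segments where the metric prefers the newer translation.

    Under the assumption that commercial systems improve over time,
    a higher value suggests closer agreement with that improvement.
    """
    wins = sum(
        score(src, new) > score(src, old)
        for src, old, new in zip(sources, older, newer)
    )
    return wins / len(sources)
```

For instance, plugging in any segment-level metric as `score` and two weekly snapshots of the same translation direction yields a value in [0, 1]; values above 0.5 would indicate that the metric tends to prefer the more recent translations, in line with the assumed improvement.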
arXiv.org Artificial Intelligence
Jul-3-2024