Prompting Large Language Models with Human Error Markings for Self-Correcting Machine Translation
Nathaniel Berger, Stefan Riezler, Miriam Exel, Matthias Huck
arXiv.org Artificial Intelligence
While large language models (LLMs) pre-trained on massive amounts of unpaired language data have reached the state of the art in machine translation (MT) of general-domain texts, post-editing (PE) is still required to correct errors and to enhance term translation quality in specialized domains. In this paper we present a pilot study on enhancing translation memories (TM) produced by PE (source segments, machine translations, and reference translations, henceforth called PE-TM) for the needs of correct and consistent term translation in technical domains. We investigate a lightweight two-step scenario where, at inference time, a human translator marks errors in the first translation step, and in a second step a few similar examples are extracted from the PE-TM to prompt an LLM. Our experiment shows that the additional effort of augmenting translations with human error markings guides the LLM to focus on correcting the marked errors, yielding consistent improvements over automatic PE (APE) and MT from scratch.
Jun-4-2024
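
The abstract describes a two-step setup: a human marks errors in a first-pass translation, and a few similar examples retrieved from the PE-TM are used to prompt an LLM to correct it. The sketch below illustrates how such a pipeline could be wired up; the TMEntry structure, the token-overlap retrieval, the <err> marking format, and the generic llm callable are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class TMEntry:
    """One post-edited translation memory (PE-TM) entry."""
    source: str       # source segment
    mt: str           # raw machine translation
    reference: str    # post-edited / reference translation


def token_overlap(a: str, b: str) -> float:
    """Crude similarity score: Jaccard overlap of lowercased token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(1, len(ta | tb))


def retrieve_examples(tm: list[TMEntry], source: str, k: int = 3) -> list[TMEntry]:
    """Pick the k PE-TM entries whose sources are most similar to the new source."""
    return sorted(tm, key=lambda e: token_overlap(e.source, source), reverse=True)[:k]


def build_correction_prompt(source: str, marked_mt: str, examples: list[TMEntry]) -> str:
    """Assemble a few-shot prompt asking the LLM to revise only the marked spans.

    `marked_mt` is the first-pass translation in which a human has wrapped
    erroneous spans in <err>...</err> tags (the tag format is an assumption).
    """
    shots = "\n\n".join(
        f"Source: {e.source}\nDraft translation: {e.mt}\nCorrected translation: {e.reference}"
        for e in examples
    )
    return (
        "Correct the draft translation. Only revise the spans marked with <err> tags; "
        "keep the rest of the translation unchanged.\n\n"
        f"{shots}\n\n"
        f"Source: {source}\nDraft translation: {marked_mt}\nCorrected translation:"
    )


def correct_translation(llm, tm: list[TMEntry], source: str, marked_mt: str) -> str:
    """Second step of the scenario: prompt the LLM with retrieved PE-TM examples."""
    prompt = build_correction_prompt(source, marked_mt, retrieve_examples(tm, source))
    return llm(prompt)  # `llm` is any callable mapping a prompt string to generated text
```

In the paper's setting the error markings are supplied by a human translator at inference time; here `marked_mt` simply stands in for that annotated first-pass translation.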