Leveraging Large Language Models for Building Interpretable Rule-Based Data-to-Text Systems
Warczyński, Jędrzej; Lango, Mateusz; Dušek, Ondřej
–arXiv.org Artificial Intelligence
We introduce a simple approach that uses a large language model (LLM) to automatically implement a fully interpretable rule-based data-to-text system in pure Python. Experimental evaluation on the WebNLG dataset showed that a system constructed in this way produces text of better quality (according to the BLEU and BLEURT metrics) than the same LLM prompted to produce outputs directly, and produces fewer hallucinations than a BART language model fine-tuned on the same data. Furthermore, at runtime, the approach generates text in a fraction of the processing time required by neural approaches, using only a single CPU.
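The kind of rule-based data-to-text system the abstract describes can be sketched as a set of per-predicate templates that verbalise WebNLG-style (subject, predicate, object) triples. This is a minimal illustrative sketch, not the paper's actual system: the template strings and predicate names here are hypothetical, and in the paper the rules are authored by the LLM rather than written by hand.

```python
# Illustrative sketch of a rule-based triple verbaliser in pure Python.
# The predicates and templates below are made-up examples; the paper's
# system has its rule code generated automatically by an LLM.

TEMPLATES = {
    "capital": "{subj} has {obj} as its capital.",
    "country": "{subj} is located in {obj}.",
    "leaderName": "{subj} is led by {obj}.",
}

def verbalise(triples):
    """Render a list of (subject, predicate, object) triples as text."""
    sentences = []
    for subj, pred, obj in triples:
        # Fall back to a generic pattern for predicates without a rule.
        template = TEMPLATES.get(pred, "{subj} {pred} {obj}.")
        sentences.append(template.format(subj=subj, pred=pred, obj=obj))
    return " ".join(sentences)
```

Because such a system is plain Python with no model inference at runtime, it runs on a single CPU and every output sentence can be traced back to the rule that produced it, which is the interpretability and speed advantage the abstract highlights.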
Feb-27-2025