Reducing Hallucinations in Summarization via Reinforcement Learning with Entity Hallucination Index
Praveenkumar Katwe, Rakesh Chandra, Balabantaray Kali, Prasad Vittala
arXiv.org Artificial Intelligence
Reducing hallucinations in abstractive summarization remains a critical challenge for deploying language models (LMs) in real-world settings. In this work, we introduce a reward-driven fine-tuning framework that explicitly optimizes for the Entity Hallucination Index (EHI), a metric designed to quantify the presence, correctness, and grounding of named entities in generated summaries. Given a corpus of meeting transcripts, we first generate baseline summaries using a pre-trained LM and compute EHI scores via automatic entity extraction and matching. We then apply reinforcement learning to fine-tune the model parameters, using EHI as a reward signal to bias generation toward entity-faithful outputs. Our approach does not rely on human-written factuality annotations, enabling scalable fine-tuning. Experiments demonstrate consistent improvements in EHI across datasets, with qualitative analysis revealing a significant reduction in entity-level hallucinations without degradation in fluency or informativeness. We release a reproducible Colab pipeline, facilitating further research on hallucination-aware model fine-tuning using lightweight hallucination metrics like EHI.
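The abstract does not give the exact formula for EHI, so the following is a minimal sketch, assuming EHI behaves like the fraction of named entities in a generated summary that can be matched back to the source transcript. The spaCy model, the lowercase string matching, and the neutral score for entity-free summaries are all illustrative assumptions, not the authors' implementation.

```python
# Minimal EHI-style reward sketch. Assumption (not from the paper):
# EHI = share of summary entities that are grounded in the source.
import spacy

nlp = spacy.load("en_core_web_sm")  # lightweight off-the-shelf NER model


def extract_entities(text: str) -> set[str]:
    """Return the set of normalized named-entity strings found in `text`."""
    return {ent.text.strip().lower() for ent in nlp(text).ents}


def ehi_reward(source: str, summary: str) -> float:
    """Score a summary by the share of its entities grounded in the source.

    1.0 -> every summary entity appears in the source (fully grounded)
    0.0 -> every summary entity is hallucinated
    A summary with no entities gets a neutral 1.0 here; penalizing
    entity omission instead is a plausible variant of the metric.
    """
    source_entities = extract_entities(source)
    summary_entities = extract_entities(summary)
    if not summary_entities:
        return 1.0
    grounded = sum(1 for e in summary_entities if e in source_entities)
    return grounded / len(summary_entities)


# Usage: attach the scalar to a sampled summary as its reward.
reward = ehi_reward(
    "Priya from Acme Corp presented the Q3 roadmap in Bengaluru.",
    "Priya presented the Q3 roadmap for Acme Corp.",
)
```

In a PPO-style fine-tuning loop, a scalar like this would serve as the sequence-level reward for each sampled summary, biasing the policy toward entity-faithful outputs without any human factuality labels, which is what makes the approach scalable.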
Jul-31-2025
- Country:
- Asia > India
- Karnataka > Bengaluru (0.04)
- Odisha > Bhubaneshwar (0.04)
- Europe > Spain
- Catalonia > Barcelona Province > Barcelona (0.04)
- North America > United States (0.04)
- Genre:
- Research Report > New Finding (0.46)