Improving the fact-checking performance of language models by relying on their entailment ability
Gaurav Kumar, Debajyoti Mazumder, Ayush Garg, Jasabanta Patro
arXiv.org Artificial Intelligence
Automated fact-checking has been a challenging task for the research community. Prior work has tried various strategies, such as end-to-end training, retrieval-augmented generation, and prompt engineering, to build robust fact-checking systems. However, their accuracy has not been high enough for real-world deployment. In contrast, we propose a simple yet effective strategy in which entailed justifications generated by LLMs are used to train encoder-only language models (ELMs) for fact-checking. We conducted a rigorous set of experiments, comparing our approach with recent works and with various prompting and fine-tuning strategies, to demonstrate its superiority. Additionally, we performed a qualitative analysis of model explanations, ablation studies, and an error analysis to provide a comprehensive understanding of our approach.
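The abstract describes pairing each claim with an LLM-generated entailed justification and fine-tuning an encoder-only model on those pairs. Below is a minimal sketch of that setup, assuming a `roberta-base` encoder, a three-way label scheme, and toy data; the model name, labels, and `ClaimJustificationDataset` helper are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch: fine-tune an encoder-only model on (claim, justification)
# pairs, as the abstract describes. Model, labels, and data are assumptions.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["supported", "refuted", "not_enough_info"]  # assumed label scheme

class ClaimJustificationDataset(Dataset):
    """Encodes each claim with its LLM-generated justification as a pair."""
    def __init__(self, examples, tokenizer, max_len=256):
        self.examples = examples
        self.tokenizer = tokenizer
        self.max_len = max_len

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        claim, justification, label = self.examples[idx]
        # Sentence-pair encoding lets the encoder model the entailment
        # relation between claim and justification.
        enc = self.tokenizer(claim, justification, truncation=True,
                             max_length=self.max_len, padding="max_length",
                             return_tensors="pt")
        item = {k: v.squeeze(0) for k, v in enc.items()}
        item["labels"] = torch.tensor(LABELS.index(label))
        return item

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS))

# Toy example; in practice the justification would come from an LLM
# prompted to state whether the evidence entails the claim.
train_examples = [
    ("The Eiffel Tower is in Berlin.",
     "The Eiffel Tower is located in Paris, France, so the claim is false.",
     "refuted"),
]
loader = DataLoader(ClaimJustificationDataset(train_examples, tokenizer),
                    batch_size=8, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for batch in loader:
    optimizer.zero_grad()
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
```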
Oct-22-2025
- Country:
  - Asia (1.00)
  - Europe (0.92)
  - North America > United States > Wisconsin (0.14)
- Genre:
  - Research Report > New Finding (0.67)
  - Workflow (0.92)
- Industry:
  - Banking & Finance
  - Education (0.67)
  - Government
  - Health & Medicine (1.00)
  - Law (1.00)
  - Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
  - Media > News (0.93)