Mining the Explainability and Generalization: Fact Verification Based on Self-Instruction
arXiv.org Artificial Intelligence
Fact-checking based on commercial LLMs has become mainstream. Although these methods offer high explainability, they fall short of traditional fine-tuning approaches in accuracy, and data security is also a significant concern. In this paper, we propose a self-instruction-based fine-tuning approach for fact-checking that balances accuracy and explainability. Our method consists of Data Augmentation and Improved DPO fine-tuning. The former instructs the model to generate both positive and negative explanations from claim-evidence pairs and their labels, then samples the dataset according to our customized difficulty criteria. The latter employs our proposed improved DPO to fine-tune the model on the generated samples. We fine-tune the smallest-scale LLaMA-7B model and evaluate it on the challenging fact-checking datasets FEVEROUS and HOVER, comparing against four fine-tuning methods and three few-shot learning methods. The experiments demonstrate that our approach not only matches or surpasses the accuracy of traditional fine-tuning methods but also generates fluent explanation text, and it exhibits strong generalization performance. Our method is the first to leverage self-supervised learning for fact-checking and innovatively combines contrastive learning with improved DPO in fine-tuning LLMs, as the experiments show.
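The abstract does not spell out the improved DPO objective, but the fine-tuning stage builds on the standard DPO preference loss over the generated (positive, negative) explanation pairs. The minimal sketch below illustrates that baseline objective only; the function name, argument names, and the beta value are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_pos_logps: torch.Tensor,
             policy_neg_logps: torch.Tensor,
             ref_pos_logps: torch.Tensor,
             ref_neg_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO objective over (positive, negative) explanation pairs.

    Each argument holds the summed token log-probabilities of a generated
    explanation given its claim-evidence prompt, one value per pair:
    from the trainable policy model and from a frozen reference model.
    """
    # Log-ratio of policy to frozen reference model for each explanation
    pos_logratio = policy_pos_logps - ref_pos_logps
    neg_logratio = policy_neg_logps - ref_neg_logps
    # Reward the positive (preferred) explanation over the negative one
    logits = beta * (pos_logratio - neg_logratio)
    return -F.logsigmoid(logits).mean()
```

In this framing, the positive explanations generated during Data Augmentation play the role of preferred responses and the negative explanations the dispreferred ones; how the paper's improved variant and difficulty-based sampling modify this loss is not stated here.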
May-23-2024
- Country:
- Asia > Middle East
- UAE (0.14)
- Europe (1.00)
- North America > United States
- California (0.14)
- New York (0.14)
- Virginia (0.14)
- Genre:
- Research Report > New Finding (0.68)
- Industry:
- Government (1.00)
- Information Technology > Security & Privacy (1.00)
- Technology: