Fact-Checking Complex Claims with Program-Guided Reasoning
Liangming Pan, Xiaobao Wu, Xinyuan Lu, Anh Tuan Luu, William Yang Wang, Min-Yen Kan, Preslav Nakov
Fact-checking real-world claims often requires collecting multiple pieces of evidence and applying complex multi-step reasoning. In this paper, we present Program-Guided Fact-Checking (ProgramFC), a novel fact-checking model that decomposes complex claims into simpler sub-tasks that can be solved using a shared library of specialized functions. We first leverage the in-context learning ability of large language models to generate reasoning programs to guide the verification process. Afterward, we execute the program by delegating each sub-task to the corresponding sub-task handler. This process makes our model both explanatory and data-efficient, providing clear explanations of its reasoning process and requiring minimal training data. We evaluate ProgramFC on two challenging fact-checking datasets and show that it outperforms seven fact-checking baselines across different settings of evidence availability, with explicit output programs that benefit human debugging. Our code and data are publicly available at https://github.com/mbzuai-nlp/ProgramFC.
arXiv.org Artificial Intelligence
May 22, 2023
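The abstract describes executing a generated reasoning program by delegating each sub-task to a handler from a shared function library. Below is a minimal sketch of that execute-by-dispatch loop. The toy step syntax (`var = Question("...")` / `var = Verify("...")`), the handler stubs `answer_question` and `verify_claim`, and the conjunction of Verify results are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
import re

# Illustrative stubs: ProgramFC would delegate these to a QA model and a
# fact-verification model. Here they only demonstrate the dispatch mechanism.
def answer_question(question: str, context: str) -> str:
    """Stand-in for a question-answering sub-task handler."""
    return f"<answer to: {question}>"

def verify_claim(statement: str, context: str) -> bool:
    """Stand-in for a simple-claim verification sub-task handler."""
    return True  # a real handler would check the statement against evidence

def execute_program(program: list[str], context: str) -> bool:
    """Run a reasoning program step by step, binding each step's result to a
    variable that later steps can reference via {variable} placeholders."""
    env: dict[str, object] = {}
    verdict = True
    for step in program:
        # Each step is expected to look like: var = Function("argument")
        match = re.match(r'(\w+)\s*=\s*(Question|Verify)\("(.*)"\)', step)
        if match is None:
            raise ValueError(f"Unparseable program step: {step}")
        var, func, arg = match.groups()
        # Substitute the results of earlier steps into the argument string.
        for name, value in env.items():
            arg = arg.replace("{" + name + "}", str(value))
        if func == "Question":
            env[var] = answer_question(arg, context)
        else:  # func == "Verify"
            result = verify_claim(arg, context)
            env[var] = result
            verdict = verdict and result  # assumed: conjoin Verify outcomes
    return verdict

# A two-step program for the claim
# "The director of film X was born in country Y."
program = [
    'answer_1 = Question("Who directed film X?")',
    'fact_1 = Verify("{answer_1} was born in country Y.")',
]
print(execute_program(program, context="<retrieved evidence>"))
```

In the paper, programs of this shape are produced by prompting a large language model with in-context examples; the sketch above covers only the execution side, where each parsed step is routed to its sub-task handler and intermediate answers feed into later steps.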