FACT-GPT: Fact-Checking Augmentation via Claim Matching with LLMs
Choi, Eun Cheol, Ferrara, Emilio
The fact-checking process, though complex and labor-intensive, encompassing several stages from claim identification to drawing final conclusions [5, 7], could be made more efficient through AI tools [1]. It is critical to note, however, that complete automation could undermine journalistic principles and practices [18], indicating that the goal lies in enhancing, not replacing, human expertise [4]. A key element in monitoring the spread of false claims across communication platforms is claim matching, in which new instances of previously fact-checked claims are identified [21]. The importance of claim matching stems from the tendency of false claims to be reused and reiterated in different formats [18]. Effective claim matching can expedite the early detection of misinformation, content moderation, and automated debunking [8]. This paper explores how large language models (LLMs) can support the claim matching stage of the fact-checking procedure. Our study reveals that, when appropriately fine-tuned, LLMs can effectively match claims. Our framework could benefit fact-checkers by minimizing redundant verification, support online platforms in content moderation, and assist researchers in analyzing misinformation across large corpora.
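As a rough illustration of the claim-matching setup described in the abstract, the sketch below frames the task as deciding whether a new social media post restates a previously fact-checked claim. The prompt wording, the entailment-style labels, and the `call_llm` wrapper are illustrative assumptions, not the authors' exact FACT-GPT prompt or pipeline.

```python
# Minimal sketch of LLM-based claim matching (illustrative only; not the
# authors' exact FACT-GPT pipeline). `call_llm` is a hypothetical stand-in
# for a call to a fine-tuned LLM endpoint.

from typing import Literal

# Hypothetical store of previously fact-checked claims.
FACT_CHECKED_CLAIMS = [
    "COVID-19 vaccines alter human DNA.",
    "Drinking bleach cures COVID-19.",
]

# Assumed prompt template pairing a post with a fact-checked claim.
PROMPT_TEMPLATE = (
    'Previously fact-checked claim: "{claim}"\n'
    'Social media post: "{post}"\n'
    "Does the post make the same claim? "
    "Answer ENTAILMENT, NEUTRAL, or CONTRADICTION."
)

Label = Literal["ENTAILMENT", "NEUTRAL", "CONTRADICTION"]


def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around a fine-tuned LLM; replace with a real client."""
    raise NotImplementedError("Plug in the model endpoint used in practice.")


def match_claim(post: str, claim: str) -> Label:
    """Classify whether a new post restates a previously fact-checked claim."""
    answer = call_llm(PROMPT_TEMPLATE.format(claim=claim, post=post)).strip().upper()
    return answer if answer in {"ENTAILMENT", "NEUTRAL", "CONTRADICTION"} else "NEUTRAL"


def find_matches(post: str) -> list[str]:
    """Return all previously fact-checked claims the post appears to repeat."""
    return [c for c in FACT_CHECKED_CLAIMS if match_claim(post, c) == "ENTAILMENT"]
```

In practice, a retrieval step would typically narrow the candidate claims before the pairwise LLM check; the loop over the full claim store above is kept only for brevity.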
arXiv.org Artificial Intelligence
Feb-8-2024
- Country:
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- Genre:
- Research Report (0.82)
- Industry:
- Health & Medicine > Therapeutic Area > Immunology (0.48)
- Media > News (0.72)