Generative Large Language Models in Automated Fact-Checking: A Survey
Ivan Vykopal, Matúš Pikuliak, Simon Ostermann, Marián Šimko
arXiv.org Artificial Intelligence
The dissemination of false information across online platforms poses a serious societal challenge, necessitating robust measures for information verification. While manual fact-checking efforts remain instrumental, the growing volume of false information requires automated methods. Large language models (LLMs) offer promising opportunities to assist fact-checkers, leveraging LLMs' extensive knowledge and robust reasoning capabilities. In this survey paper, we investigate the use of generative LLMs in fact-checking, illustrating the various approaches that have been employed and the techniques for prompting or fine-tuning LLMs. By providing an overview of existing approaches, this survey aims to improve the understanding of how LLMs can be used in fact-checking and to facilitate further progress in their involvement in this process.
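The prompting approaches mentioned above can be illustrated with a minimal zero-shot verification prompt. The template, label set, and function name below are illustrative assumptions for the sake of example, not drawn from any particular system covered by the survey:

```python
# Minimal sketch of a zero-shot claim-verification prompt, a common
# prompting technique for LLM-assisted fact-checking. The template and
# the three-way label set (SUPPORTED / REFUTED / NOT ENOUGH INFO) are
# illustrative assumptions.

def build_fact_check_prompt(claim: str, evidence: list[str]) -> str:
    """Format a claim and retrieved evidence into a verification prompt."""
    evidence_block = "\n".join(f"[{i + 1}] {e}" for i, e in enumerate(evidence))
    return (
        "You are a fact-checking assistant.\n"
        f"Claim: {claim}\n"
        f"Evidence:\n{evidence_block}\n"
        "Based only on the evidence above, label the claim as "
        "SUPPORTED, REFUTED, or NOT ENOUGH INFO, and explain briefly."
    )

prompt = build_fact_check_prompt(
    "The Eiffel Tower is located in Berlin.",
    ["The Eiffel Tower is a wrought-iron lattice tower in Paris, France."],
)
print(prompt)
```

The resulting string would then be sent to an LLM; fine-tuning approaches instead train the model on many such (claim, evidence, label) examples.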
Jul-2-2024