Factuality Challenges in the Era of Large Language Models

Isabelle Augenstein, Timothy Baldwin, Meeyoung Cha, Tanmoy Chakraborty, Giovanni Luca Ciampaglia, David Corney, Renee DiResta, Emilio Ferrara, Scott Hale, Alon Halevy, Eduard Hovy, Heng Ji, Filippo Menczer, Ruben Miguez, Preslav Nakov, Dietram Scheufele, Shivam Sharma, Giovanni Zagni

arXiv.org (Artificial Intelligence)

The emergence of tools based on Large Language Models (LLMs), such as OpenAI's ChatGPT, Microsoft's Bing Chat, and Google's Bard, has garnered immense public attention. These incredibly useful, natural-sounding tools mark significant advances in natural language generation, yet they exhibit a propensity to generate false, erroneous, or misleading content -- commonly referred to as "hallucinations." Moreover, LLMs can be exploited for malicious applications, such as generating false but credible-sounding content and profiles at scale. This poses a significant challenge to society in terms of the potential deception of users and the increasing dissemination of inaccurate information. In light of these risks, we explore the kinds of technological innovations, regulatory reforms, and AI literacy initiatives needed from fact-checkers, news organizations, and the broader research and policy communities. By identifying the risks, the imminent threats, and some viable solutions, we seek to shed light on navigating various aspects of veracity in the era of generative AI.
