

Fears AI factcheckers on X could increase promotion of conspiracy theories

The Guardian

A decision by Elon Musk's X social media platform to enlist artificial intelligence chatbots to draft factchecks risks increasing the promotion of "lies and conspiracy theories", a former UK technology minister has warned. Damian Collins accused Musk's firm of "leaving it to bots to edit the news" after X announced on Tuesday that it would allow large language models to write community notes to clarify or correct contentious posts, before users approve them for publication. The notes have previously been written by humans. X said using AI to write factchecking notes – which sit beneath some X posts – "advances the state of the art in improving information quality on the internet". Keith Coleman, the vice-president of product at X, said humans would review AI-generated notes and the note would appear only if people with a variety of viewpoints found it useful.


From beef noodles to bots: Taiwan's factcheckers on fighting Chinese disinformation and 'unstoppable' AI

The Guardian

Charles Yeh's battle with disinformation in Taiwan began with a bowl of beef noodles. Nine years ago, the Taiwanese engineer was at a restaurant with his family when his mother-in-law started picking the green onions out of her food. Asked what she was doing, she explained that onions can harm your liver. She knew this, she said, because she had received text messages telling her so. Yeh was puzzled by this. His family had always happily eaten green onions.


The Earth is Flat? Unveiling Factual Errors in Large Language Models

Wenxuan Wang, Juluan Shi, Zhaopeng Tu, Youliang Yuan, Jen-tse Huang, Wenxiang Jiao, Michael R. Lyu

arXiv.org Artificial Intelligence

Large Language Models (LLMs) like ChatGPT are foundational in various applications due to the extensive knowledge they acquire from pre-training and fine-tuning. Despite this, they are prone to generating factual and commonsense errors, raising concerns that they may mislead users in critical areas like healthcare, journalism, and education. Current methods for evaluating LLMs' veracity are limited by test data leakage or the need for extensive human labor, hindering efficient and accurate error detection. To tackle this problem, we introduce a novel, automatic testing framework, FactChecker, aimed at uncovering factual inaccuracies in LLMs. This framework involves three main steps: First, it constructs a factual knowledge graph by retrieving fact triplets from a large-scale knowledge database. Then, leveraging the knowledge graph, FactChecker employs a rule-based approach to generate three types of questions (Yes-No, Multiple-Choice, and WH questions) that involve single-hop and multi-hop relations, along with correct answers. Lastly, it assesses the LLMs' responses for accuracy using tailored matching strategies for each question type. Our extensive tests on six prominent LLMs, including text-davinci-002, text-davinci-003, ChatGPT (gpt-3.5-turbo, gpt-4), Vicuna, and LLaMA-2, reveal that FactChecker can trigger factual errors in up to 45% of questions in these models. Moreover, we demonstrate that FactChecker's test cases can improve LLMs' factual accuracy through in-context learning and fine-tuning (e.g., llama-2-13b-chat's accuracy increases from 35.3% to 68.5%). We are making all code, data, and results available for future research endeavors.
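The rule-based question-generation step the abstract describes can be sketched as follows. This is an illustrative approximation, not the paper's released code: the function names, templates, and data structures are hypothetical, and it shows only how a single fact triplet might be turned into the three question types (Yes-No, Multiple-Choice, and WH), each paired with its correct answer for later answer matching.

```python
import random

def make_questions(triplet, distractors, seed=0):
    """Turn a (subject, relation, object) fact triplet into three
    question types, in the spirit of FactChecker's rule-based step.
    `distractors` are plausible-but-wrong objects for multiple choice."""
    subj, rel, obj = triplet
    questions = []

    # Yes-No question: ask whether the true fact holds.
    questions.append({
        "type": "yes-no",
        "question": f"Is it true that the {rel} of {subj} is {obj}?",
        "answer": "Yes",
    })

    # Multiple-choice question: true object mixed with distractors.
    options = [obj] + list(distractors)
    random.Random(seed).shuffle(options)
    questions.append({
        "type": "multiple-choice",
        "question": (f"What is the {rel} of {subj}? "
                     f"Options: {', '.join(options)}"),
        "answer": obj,
    })

    # WH question: open-ended; the answer is matched against the object.
    questions.append({
        "type": "wh",
        "question": f"What is the {rel} of {subj}?",
        "answer": obj,
    })
    return questions

qs = make_questions(("France", "capital", "Paris"), ["Lyon", "Berlin"])
for q in qs:
    print(q["type"], "->", q["question"])
```

Multi-hop variants would chain two triplets (e.g. "What is the capital of the country that borders X?") before applying the same templates; the accuracy check then uses a matching strategy per type, such as exact match on the chosen option for multiple choice and fuzzy string match for WH answers.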


Customising annotation tools for factchecking at scale

#artificialintelligence

Full Fact is the UK's independent, non-partisan, factchecking charity. We are funded by individuals, trusts, foundations and many others. Our work on automation is funded by Google, Open Society Foundations and Omidyar Network. We work to anchor public debate to reality by providing independent factchecking. We've been doing this since 2010.


The ambitious, possibly dangerous plan to fight fake news with AI

#artificialintelligence

Facts matter: but is that statement itself a fact? Suppose it could be proven or refuted – what then? What is the best way to tell someone they have made an error? How do you nudge the world in the direction of truth? These are the questions professional factcheckers wrestle with daily.