claimbuster


Document-level Claim Extraction and Decontextualisation for Fact-Checking

Deng, Zhenyun, Schlichtkrull, Michael, Vlachos, Andreas

arXiv.org Artificial Intelligence

Selecting which claims to check is a time-consuming task for human fact-checkers, especially from documents consisting of multiple sentences and containing multiple claims. However, existing claim extraction approaches focus more on identifying and extracting claims from individual sentences, e.g., identifying whether a sentence contains a claim or the exact boundaries of the claim within a sentence. In this paper, we propose a method for document-level claim extraction for fact-checking, which aims to extract check-worthy claims from documents and decontextualise them so that they can be understood out of context. Specifically, we first recast claim extraction as extractive summarization in order to identify central sentences from documents, then rewrite them to include necessary context from the originating document through sentence decontextualisation. Evaluation with both automatic metrics and a fact-checking professional shows that our method is able to extract check-worthy claims from documents more accurately than previous work, while also improving evidence retrieval.
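The two-step pipeline described above (pick a central sentence, then rewrite it to stand alone) can be illustrated with a toy sketch. This is not the authors' model: the paper uses learned extractive summarization and a trained decontextualisation rewriter, whereas this sketch substitutes simple word-overlap centrality and a hypothetical pronoun-to-entity substitution table, purely to show the shape of the pipeline.

```python
import re
from collections import Counter

def split_sentences(doc):
    """Naive sentence splitter on terminal punctuation."""
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', doc) if s.strip()]

def tokens(s):
    return Counter(re.findall(r'[a-z]+', s.lower()))

def centrality(sent, others):
    # Score a sentence by its summed word overlap with the rest of the
    # document -- a crude stand-in for learned extractive summarization.
    t = tokens(sent)
    return sum(sum((t & tokens(o)).values()) for o in others)

def extract_central_sentence(doc):
    sents = split_sentences(doc)
    return max(sents, key=lambda s: centrality(s, [o for o in sents if o != s]))

def decontextualise(sentence, entity_map):
    # Toy decontextualisation: replace pronouns with the entities they
    # refer to, so the claim can be understood out of context.
    for pron, ent in entity_map.items():
        sentence = re.sub(rf'\b{pron}\b', ent, sentence)
    return sentence

doc = ("Emissions fell last year. "
       "The report claims emissions fell by 40 percent. "
       "Critics dispute the report.")
claim = extract_central_sentence(doc)
# -> "The report claims emissions fell by 40 percent."
```

In the paper the rewriting step adds whatever context from the originating document the claim needs; the substitution table here is a placeholder for that learned component.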


It Takes Nine to Smell a Rat: Neural Multi-Task Learning for Check-Worthiness Prediction

Vasileva, Slavena, Atanasova, Pepa, Màrquez, Lluís, Barrón-Cedeño, Alberto, Nakov, Preslav

arXiv.org Artificial Intelligence

We propose a multi-task deep-learning approach for estimating the check-worthiness of claims in political debates. Given a political debate, such as the 2016 US Presidential and Vice-Presidential ones, the task is to predict which statements in the debate should be prioritized for fact-checking. While different fact-checking organizations would naturally make different choices when analyzing the same debate, we show that it pays to learn from multiple sources simultaneously (PolitiFact, FactCheck, ABC, CNN, NPR, NYT, Chicago Tribune, The Guardian, and Washington Post) in a multi-task learning setup, even when a particular source is chosen as a target to imitate. Our evaluation shows state-of-the-art results on a standard dataset for the task of check-worthiness prediction.
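The multi-task setup can be sketched as a shared encoder with one prediction head per fact-checking source, so that all nine sources shape the shared representation even when only one is the imitation target. The snippet below is an illustrative forward pass with random weights, not the paper's trained network; the dimensions and the single tanh layer are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

SOURCES = ["PolitiFact", "FactCheck", "ABC", "CNN", "NPR",
           "NYT", "Chicago Tribune", "The Guardian", "Washington Post"]

class MultiTaskCheckWorthiness:
    """Shared sentence encoder feeding nine source-specific binary heads."""

    def __init__(self, in_dim, hidden_dim=16):
        # Shared layer: learned jointly from all nine sources' labels.
        self.W_shared = rng.normal(0.0, 0.1, (in_dim, hidden_dim))
        # One lightweight head per source: its own check-worthiness notion.
        self.heads = {s: rng.normal(0.0, 0.1, hidden_dim) for s in SOURCES}

    def forward(self, x):
        h = np.tanh(x @ self.W_shared)            # shared representation
        return {s: 1.0 / (1.0 + np.exp(-(h @ w)))  # per-source sigmoid score
                for s, w in self.heads.items()}

model = MultiTaskCheckWorthiness(in_dim=8)
scores = model.forward(np.ones(8))  # one check-worthiness score per source
```

Training one head per source while sharing the encoder is what lets the model benefit from the other eight sources even when imitating a single target.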


Is that a fact? Checking politicians' statements just got a whole lot easier

Fray, Peter

#artificialintelligence

Visitors to Australia's federal parliament are often surprised by the robust verbal confrontation between the government and the opposition – technically known as questions without notice, more commonly as question time. A theatrical highpoint of every sitting day, question time is part intellectual cage fight, part kindergarten spat – and all psychological warfare. Political journalists watch the hour-long question time as drought-stricken farmers view the clouds. They look for signs, they read the climate. But what if you were interested in facts?