claimbuster
Document-level Claim Extraction and Decontextualisation for Fact-Checking
Zhenyun Deng, Michael Schlichtkrull, Andreas Vlachos
Selecting which claims to check is a time-consuming task for human fact-checkers, especially for documents consisting of multiple sentences and containing multiple claims. However, existing claim extraction approaches focus on identifying and extracting claims from individual sentences, e.g., identifying whether a sentence contains a claim or the exact boundaries of the claim within a sentence. In this paper, we propose a method for document-level claim extraction for fact-checking, which aims to extract check-worthy claims from documents and decontextualise them so that they can be understood out of context. Specifically, we first recast claim extraction as extractive summarisation in order to identify central sentences from documents, then rewrite them to include the necessary context from the originating document through sentence decontextualisation. Evaluation with both automatic metrics and a fact-checking professional shows that our method extracts check-worthy claims from documents more accurately than previous work, while also improving evidence retrieval.
- Asia > India (0.28)
- Europe > Middle East (0.06)
- Africa > Middle East (0.06)
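The two-step pipeline this abstract describes (select central sentences as in extractive summarisation, then decontextualise them) could be sketched roughly as follows. This is an illustrative stand-in, not the paper's actual model: centrality is approximated here by lexical overlap with the rest of the document, `split_sentences` is a naive helper, and the decontextualisation (rewriting) step is omitted.

```python
import re
from collections import Counter


def split_sentences(doc):
    # Naive splitter on sentence-final punctuation; a real system
    # would use a trained sentence segmenter.
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', doc) if s.strip()]


def centrality_ranking(doc):
    """Rank sentences by lexical overlap with the rest of the document,
    a crude stand-in for the extractive-summarisation step."""
    sents = split_sentences(doc)
    bags = [Counter(re.findall(r'\w+', s.lower())) for s in sents]
    total = Counter()
    for bag in bags:
        total.update(bag)
    ranked = []
    for bag, sent in zip(bags, sents):
        rest = total - bag  # word counts of all *other* sentences
        overlap = sum(min(bag[w], rest[w]) for w in bag)
        ranked.append((overlap / max(1, sum(bag.values())), sent))
    return sorted(ranked, reverse=True)


def extract_central_claim(doc):
    # Top-ranked sentence; the paper would then decontextualise it.
    return centrality_ranking(doc)[0][1]
```

A full implementation would follow this with a rewriting model that resolves pronouns and adds the document context the selected sentence needs to stand alone.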
It Takes Nine to Smell a Rat: Neural Multi-Task Learning for Check-Worthiness Prediction
Slavena Vasileva, Pepa Atanasova, Lluís Màrquez, Alberto Barrón-Cedeño, Preslav Nakov
We propose a multi-task deep-learning approach for estimating the check-worthiness of claims in political debates. Given a political debate, such as the 2016 US Presidential and Vice-Presidential ones, the task is to predict which statements in the debate should be prioritized for fact-checking. While different fact-checking organizations would naturally make different choices when analyzing the same debate, we show that it pays to learn from multiple sources simultaneously (PolitiFact, FactCheck, ABC, CNN, NPR, NYT, Chicago Tribune, The Guardian, and Washington Post) in a multi-task learning setup, even when a particular source is chosen as a target to imitate. Our evaluation shows state-of-the-art results on a standard dataset for the task of check-worthiness prediction.
- North America > United States > Illinois > Cook County > Chicago (0.25)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Europe > Switzerland (0.04)
- Media > News (1.00)
- Government > Voting & Elections (0.94)
- Government > Regional Government > North America Government > United States Government (0.46)
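The multi-task setup this abstract describes can be illustrated with a toy sketch: each debate sentence gets one binary target per fact-checking source, and a shared sentence representation feeds one classification head per source. The helper names (`to_multitask_target`, `forward`) and the tiny linear heads are hypothetical illustrations of the general technique, not the paper's architecture.

```python
import math

# The nine sources named in the abstract, one prediction task each.
SOURCES = ["PolitiFact", "FactCheck", "ABC", "CNN", "NPR", "NYT",
           "Chicago Tribune", "The Guardian", "Washington Post"]


def to_multitask_target(selected_by):
    """Turn the set of sources that chose to fact-check a sentence
    into a 9-dimensional binary target vector, one task per source."""
    return [1 if src in selected_by else 0 for src in SOURCES]


def forward(shared_repr, task_weights):
    """One sigmoid head per source on top of a shared representation;
    in the paper the shared part would be a learned neural encoder."""
    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))
    return {src: sigmoid(sum(w * x for w, x in zip(ws, shared_repr)))
            for src, ws in task_weights.items()}
```

Training then minimises the summed per-task losses, so signal from all nine sources shapes the shared encoder even when only one source is the target to imitate.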
Is that a fact? Checking politicians' statements just got a whole lot easier
Peter Fray
Visitors to Australia's federal parliament are often surprised by the robust verbal confrontation between the government and the opposition – technically known as questions without notice, more commonly as question time. A theatrical highpoint of every sitting day, question time is part intellectual cage fight, part kindergarten spat – and all psychological warfare. Political journalists watch the hour-long question time as drought-stricken farmers view the clouds. They look for signs, they read the climate. But what if you were interested in facts?
- Oceania > Australia (1.00)
- North America > United States > Texas > Tarrant County > Arlington (0.05)
- North America > United States > New York (0.05)
- Government > Voting & Elections (0.71)
- Government > Regional Government > Oceania Government > Australia Government (0.70)