The reporting and analysis of current events around the globe has expanded from professional, editor-led journalism all the way to citizen journalism. Politicians and other key players enjoy direct access to their audiences through social media, bypassing the filters of official cables or traditional media. However, the multiple advantages of free speech and direct communication are dimmed by the misuse of these media to spread inaccurate or misleading claims. These phenomena have given rise to the modern incarnation of the fact-checker -- a professional whose main aim is to examine claims using available evidence and to assess their veracity. As in other text forensics tasks, the sheer amount of information available makes the work of the fact-checker more difficult. With this in mind, starting from the perspective of the professional fact-checker, we survey the available intelligent technologies that can support the human expert in the different steps of her fact-checking endeavor. These include identifying claims worth fact-checking; detecting relevant previously fact-checked claims; retrieving relevant evidence to fact-check a claim; and actually verifying a claim. In each case, we pay attention to the challenges for future work and the potential impact on real-world fact-checking.
One of the latest collaborations between artificial intelligence and humans is further evidence that machines and humans can achieve better results when working together. Artificial intelligence (AI) is now on the job to combat the spread of misinformation on the internet and social platforms, thanks to the efforts of start-ups such as Logically. While AI is able to analyze the enormous amount of information generated daily on a scale that is impossible for humans, ultimately, humans need to be part of the fact-checking process to ensure credibility. As Lyric Jain, founder and CEO of Logically, put it: "Toxic news travels faster than the truth. Our world desperately needs a way to discern truth from fiction in our news and public, political and economic discussions, and artificial intelligence will help us do that."
We propose a multi-task deep-learning approach for estimating the check-worthiness of claims in political debates. Given a political debate, such as the 2016 US Presidential and Vice-Presidential debates, the task is to predict which statements in the debate should be prioritized for fact-checking. While different fact-checking organizations would naturally make different choices when analyzing the same debate, we show that it pays to learn from multiple sources simultaneously (PolitiFact, FactCheck, ABC, CNN, NPR, NYT, Chicago Tribune, The Guardian, and Washington Post) in a multi-task learning setup, even when a particular source is chosen as the target to imitate. Our evaluation shows state-of-the-art results on a standard dataset for the task of check-worthiness prediction.
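To make the multi-task setup concrete, the following is a minimal, illustrative sketch of the general idea: a shared encoder produces one sentence representation, and each fact-checking source gets its own binary "check-worthy?" head, so gradients from every source update the shared layer. All class names, dimensions, and the toy training loop are assumptions for illustration; this is not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class MultiTaskCheckWorthiness:
    """Shared linear+tanh encoder with one logistic head per source."""

    def __init__(self, n_features, n_hidden, sources, lr=0.2):
        self.lr = lr
        # Shared encoder weights, updated by every task's gradient.
        self.W = rng.normal(0, 0.1, (n_features, n_hidden))
        # One logistic head per source (e.g. "PolitiFact", "CNN").
        self.heads = {s: rng.normal(0, 0.1, n_hidden) for s in sources}

    def predict(self, X, source):
        h = np.tanh(X @ self.W)          # shared representation
        return sigmoid(h @ self.heads[source])

    def train_step(self, X, labels):
        """labels: dict mapping source -> 0/1 array, one label per sentence."""
        h = np.tanh(X @ self.W)
        dW = np.zeros_like(self.W)
        for s, y in labels.items():
            p = sigmoid(h @ self.heads[s])
            err = p - y                  # cross-entropy gradient at the logits
            g_head = h.T @ err / len(y)
            # Back-propagate through tanh into the shared encoder.
            g_h = np.outer(err, self.heads[s]) * (1.0 - h ** 2)
            dW += X.T @ g_h / len(y)
            self.heads[s] -= self.lr * g_head
        self.W -= self.lr * dW           # joint update from all sources

# Toy data: 20 "sentences" with 8 features; both sources happen to agree.
X = rng.normal(size=(20, 8))
y_true = (X[:, 0] > 0).astype(float)
labels = {"PolitiFact": y_true, "CNN": y_true}

model = MultiTaskCheckWorthiness(n_features=8, n_hidden=4, sources=labels)
for _ in range(300):
    model.train_step(X, labels)

acc = ((model.predict(X, "PolitiFact") > 0.5) == y_true).mean()
```

Even in this toy form, the design choice is visible: the per-source heads let each organization's selection policy differ, while the shared encoder pools supervision from all of them, which is what makes the multi-task setup pay off when any single source's labels are scarce.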
Visitors to Australia's federal parliament are often surprised by the robust verbal confrontation between the government and the opposition -- technically known as questions without notice, more commonly as question time. A theatrical high point of every sitting day, question time is part intellectual cage fight, part kindergarten spat -- and all psychological warfare. Political journalists watch the hour-long question time as drought-stricken farmers view the clouds. They look for signs, they read the climate. But what if you were interested in facts?
It's been quite an interesting journey for Factmata since we started in January, and we're now about to launch a tool that puts factual context in the hands of the people. The launch will happen around the UK general election, and it marks the completion of our Google Digital News Initiative (DNI) project. For five months, we've been working around the clock with a distributed team of NLP researchers, PhDs and scientists from around the world to build it, and we are now putting on the final touches. As we prepare for launch, we wanted to tell the world what's next and where we want to take Factmata in the future. Given our team's previous research on automated fact-checking, we are uniquely placed to build AI that tackles the problem of online misinformation.