Tech giants like Facebook and governments around the world are struggling to deal with disinformation, from misleading posts about vaccines to incitement of sectarian violence. As artificial intelligence becomes more powerful, experts worry that disinformation generated by A.I. could make an already complex problem bigger and even more difficult to solve. In recent months, two prominent labs -- OpenAI in San Francisco and the Allen Institute for Artificial Intelligence in Seattle -- have built particularly powerful examples of this technology. Both have warned that it could become increasingly dangerous. Alec Radford, a researcher at OpenAI, argued that this technology could help governments, companies and other organizations spread disinformation far more efficiently: Rather than hire human workers to write and distribute propaganda, these organizations could lean on machines to compose believable and varied content at tremendous scale.
We are in the midst of a "tech-lash." For months, the leading Internet companies have faced a wave of criticism sparked by revelations that they unwittingly enabled the spread of Russian disinformation that distorted the 2016 election. They are now beginning to listen. Facebook responded recently when chief executive Mark Zuckerberg announced that his company is revamping its flagship News Feed service: The algorithm powering it will now prioritize content shared by your friends and family over news stories and viral videos. The company followed up by announcing that it will survey users about which outlets they trust and potentially demote those deemed untrustworthy.
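The change Zuckerberg described amounts to reweighting the feed-ranking score. The Python sketch below is purely illustrative: the post fields, weights, and trust scores are assumptions made for the example, not Facebook's actual (unpublished) ranking model.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagement_score: float       # hypothetical predicted-engagement base score
    from_friend_or_family: bool   # shared by the viewer's friends or family
    publisher_trust: float = 1.0  # hypothetical 0-1 trust score from user surveys

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order posts so friend-and-family content outranks publisher content
    and low-trust outlets are demoted; weights are illustrative only."""
    def score(p: Post) -> float:
        if p.from_friend_or_family:
            return p.engagement_score * 2.0             # boost personal content
        return p.engagement_score * p.publisher_trust   # demote untrusted outlets
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    Post("viral-video", engagement_score=0.9, from_friend_or_family=False, publisher_trust=0.3),
    Post("family-photo", engagement_score=0.6, from_friend_or_family=True),
])
print([p.post_id for p in feed])  # the family photo outranks the low-trust viral video
```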
Public opinion can be influenced by malicious actors trying to degrade public trust in media and institutions, discredit political leadership, deepen societal divides, and influence citizens' voting decisions. The rise of AI and algorithmic governance of web apps has led to the propagation of racial and gender biases, the reaffirmation of beliefs through personalised content, the infringement of user privacy, and the manipulation of users and their data. AI is a double-edged sword, used both to spread and to counter disinformation. Although the number of fact-checking initiatives quadrupled between 2013 and 2018, manual fact-checking cannot keep pace with the sheer volume of disinformation, so non-profits such as Full Fact and Chequeado have developed automated fact-checking (AFC) tools.
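A core step in most AFC pipelines is claim matching: an incoming statement is compared against claims that fact-checkers have already verified. Below is a minimal sketch of that step using TF-IDF similarity; the claim database, threshold, and verdict labels are invented for illustration, and production tools such as Full Fact's rely on far more sophisticated matching models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny invented database of previously fact-checked claims and their verdicts.
FACT_CHECKS = [
    ("Vaccines cause autism", "False"),
    ("The unemployment rate fell last quarter", "True"),
]

def match_claim(statement: str, threshold: float = 0.5):
    """Return the closest fact-checked claim, its verdict, and the similarity
    score, or None if no stored claim is similar enough to the statement."""
    claims = [claim for claim, _ in FACT_CHECKS]
    vectorizer = TfidfVectorizer().fit(claims + [statement])
    claim_vecs = vectorizer.transform(claims)
    statement_vec = vectorizer.transform([statement])
    scores = cosine_similarity(statement_vec, claim_vecs)[0]
    best = scores.argmax()
    if scores[best] >= threshold:
        return FACT_CHECKS[best], float(scores[best])
    return None

print(match_claim("Do vaccines cause autism?"))  # matches the stored "False" verdict
```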
It's one thing to note that tech companies seem to be immune to the bad press that comes from data breaches and intentionally misleading content propagating on their platforms. It's an altogether scarier prospect that those platforms and disinformation actors might sometimes be working toward similar goals. According to Dipayan Piku Ghosh, a digital-privacy expert at Harvard's Kennedy School of Government, "the commercial interests of internet platforms like Facebook and those of disinformation operators are at some points aligned." Ghosh explained that keeping users engaged for as long as possible is a core goal both for internet companies and for entities spreading false information. "For the internet platform, it allows them to create more ad space and collect more data," he said on Thursday at the Aspen Ideas Festival, which is co-hosted by the Aspen Institute and The Atlantic.
The study examines the trade-offs in using automated technology to limit the spread of disinformation online. It presents options (from self-regulatory to legislative) to regulate automated content recognition (ACR) technologies in this context. Special attention is paid to the opportunities for the European Union as a whole to take the lead in setting the framework for designing these technologies in a way that enhances accountability and transparency and respects free speech. The present project reviews some of the key academic and policy ideas on technology and disinformation and highlights their relevance to European policy. Chapter 1 introduces the background to the study and presents the definitions used.
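In this context, ACR broadly means recognising newly posted items as copies or variants of content already identified as disinformation. The sketch below is a minimal illustration under stated assumptions: the normalised-text fingerprint and the reference list are invented for the example, and real ACR systems use perceptual hashing and learned embeddings to catch near-duplicates that exact hashes miss. The accountability question the study raises concerns precisely what happens after a match, such as automatic removal versus routing to human review.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Normalise whitespace and case, then return a SHA-256 fingerprint
    (illustrative; exact hashes only catch verbatim re-uploads)."""
    normalised = " ".join(text.lower().split())
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

# Invented reference set of items already flagged by fact-checkers.
FLAGGED = {fingerprint("Drinking bleach cures the flu")}

def review_upload(text: str) -> str:
    """Route a re-upload of flagged content to human review rather than
    removing it automatically, keeping a person accountable for the decision."""
    if fingerprint(text) in FLAGGED:
        return "flagged: send to human review"
    return "published"

print(review_upload("Drinking  bleach cures the flu"))  # normalisation catches the extra space
print(review_upload("An unrelated post"))               # published
```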