Effective Mitigations for Systemic Risks from General-Purpose AI
Uuk, Risto, Brouwer, Annemieke, Schreier, Tim, Dreksler, Noemi, Pulignano, Valeria, Bommasani, Rishi
The systemic risks posed by general-purpose AI models are a growing concern, yet the effectiveness of mitigations remains underexplored. Previous research has proposed frameworks for risk mitigation, but has left gaps in our understanding of the perceived effectiveness of measures for mitigating systemic risks. Our study addresses this gap by evaluating how experts perceive different mitigations that aim to reduce the systemic risks of general-purpose AI models. We surveyed 76 experts whose expertise spans AI safety; critical infrastructure; democratic processes; chemical, biological, radiological, and nuclear risks (CBRN); and discrimination and bias. Among 27 mitigations identified through a literature review, we find that a broad range of risk mitigation measures are perceived by domain experts as both effective in reducing various systemic risks and technically feasible. In particular, three mitigation measures stand out: safety incident reports and security information sharing, third-party pre-deployment model audits, and pre-deployment risk assessments. These measures both receive the highest expert agreement ratings (>60%) across all four risk areas and are most frequently selected in experts' preferred combinations of measures (>40%). The surveyed experts highlighted that external scrutiny, proactive evaluation, and transparency are key principles for effective mitigation of systemic risks. We provide policy recommendations for implementing the most promising measures, incorporating the qualitative contributions from experts. These insights should inform regulatory frameworks and industry practices for mitigating the systemic risks associated with general-purpose AI.
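The agreement figures quoted above are simple proportions of experts endorsing a measure. As a minimal sketch of that computation, assuming a 1-5 effectiveness scale with ratings of 4 or 5 counted as agreement (both the scale and every rating below are illustrative placeholders, not the study's data):

```python
# Sketch of per-mitigation expert agreement rates.
# Scale assumption: 1-5, with >= 4 counted as agreement (illustrative, not the study's).
def agreement_rate(ratings):
    """Fraction of experts rating a mitigation 4 or 5 on a 1-5 scale."""
    return sum(1 for r in ratings if r >= 4) / len(ratings)

# Made-up ratings for three of the 27 surveyed mitigations.
ratings_by_mitigation = {
    "safety incident reports and information sharing": [5, 4, 4, 3, 5, 4, 2, 5],
    "third-party pre-deployment model audits": [4, 5, 5, 4, 3, 4, 5, 4],
    "pre-deployment risk assessments": [5, 5, 4, 4, 4, 3, 5, 4],
}

for name, ratings in ratings_by_mitigation.items():
    print(f"{name}: {agreement_rate(ratings):.0%}")
```

A measure would then "stand out" in the paper's sense when this rate exceeds 60% across all four risk areas.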
African Democracy in the Era of Generative Disinformation: Challenges and Countermeasures against AI-Generated Propaganda
In light of prominent discourse around the negative implications of generative AI, an emerging area of research is investigating the current and estimated impacts of AI-generated propaganda on African citizens participating in elections. Throughout Africa, there have already been suspected cases of AI-generated propaganda influencing electoral outcomes or precipitating coups in countries like Nigeria, Burkina Faso, and Gabon, underscoring the need for comprehensive research in this domain. This paper aims to highlight the risks associated with the spread of generative AI-driven disinformation within Africa while concurrently examining the roles of government, civil society, academia, and the general public in the responsible development, practical use, and robust governance of AI. To understand how African governments might effectively counteract the impact of AI-generated propaganda, this paper presents case studies illustrating the current usage of generative AI for election-related propaganda in Africa. Subsequently, this paper discusses efforts by fact-checking organisations to mitigate the negative impacts of disinformation, explores the potential for new initiatives to actively engage citizens in literacy efforts to combat disinformation spread, and advocates for increased governmental regulatory measures. Overall, this research seeks to increase comprehension of the potential ramifications of AI-generated propaganda on democratic processes within Africa and propose actionable strategies for stakeholders to address these multifaceted challenges.
Understanding "Democratization" in NLP and ML Research
Subramonian, Arjun, Gautam, Vagrant, Klakow, Dietrich, Talat, Zeerak
Recent improvements in natural language processing (NLP) and machine learning (ML) and increased mainstream adoption have led to researchers frequently discussing the "democratization" of artificial intelligence. In this paper, we seek to clarify how democratization is understood in NLP and ML publications, through large-scale mixed-methods analyses of papers using the keyword "democra*" published in NLP and adjacent venues. We find that democratization is most frequently used to convey (ease of) access to or use of technologies, without meaningfully engaging with theories of democratization, while research using other invocations of "democra*" tends to be grounded in theories of deliberation and debate. Based on our findings, we call for researchers to enrich their use of the term democratization with appropriate theory, towards democratic technologies beyond superficial access.
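The corpus analysis described above starts from matching the keyword family "democra*". A minimal sketch of that matching step as a regular expression (the pattern and sample text are illustrative, not the authors' exact pipeline):

```python
import re

# Match the "democra*" keyword family: democracy, democratize,
# democratization, etc. (illustrative stand-in for the paper's search).
DEMOCRA = re.compile(r"\bdemocra\w*", re.IGNORECASE)

def democra_mentions(text):
    """Return all 'democra*' tokens found in a paper's text, lowercased."""
    return [m.group(0).lower() for m in DEMOCRA.finditer(text)]

sample = ("Recent progress has led to the democratization of AI; "
          "critics ask whether Democratizing access is enough.")
print(democra_mentions(sample))  # → ['democratization', 'democratizing']
```

Papers retrieved this way could then be coded qualitatively, as the mixed-methods design describes, for whether "democratization" means mere access or engages democratic theory.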
Generative Social Choice
Fish, Sara, Gölz, Paul, Parkes, David C., Procaccia, Ariel D., Rusak, Gili, Shapira, Itai, Wüthrich, Manuel
Traditionally, social choice theory has only been applicable to choices among a few predetermined alternatives but not to more complex decisions such as collectively selecting a textual statement. We introduce generative social choice, a framework that combines the mathematical rigor of social choice theory with the capability of large language models to generate text and extrapolate preferences. This framework divides the design of AI-augmented democratic processes into two components: first, proving that the process satisfies rigorous representation guarantees when given access to oracle queries; second, empirically validating that these queries can be approximately implemented using a large language model. We apply this framework to the problem of generating a slate of statements that is representative of opinions expressed as free-form text; specifically, we develop a democratic process with representation guarantees and use this process to represent the opinions of participants in a survey about chatbot personalization. We find that 93 out of 100 participants feel "mostly" or "perfectly" represented by the slate of five statements we extracted.
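The two-component structure described above separates the democratic process from the model that powers it: the process interacts only with an abstract oracle, and a large language model approximates that oracle in practice. A minimal sketch of this separation, using a greedy slate-selection rule and a keyword-overlap stub oracle (both are simplified stand-ins, not the paper's exact queries or representation guarantees):

```python
# Component 1: a slate-selection process defined purely over oracle queries.
# Component 2 (stubbed here): an oracle an LLM would approximate in practice.
# The greedy rule and the stub below are illustrative, not the paper's method.

def coverage_oracle(statement, opinion):
    """Stub oracle: does this statement represent this opinion?
    In the framework, a large language model answers such queries."""
    return any(word in opinion.lower() for word in statement.lower().split())

def select_slate(statements, opinions, k):
    """Greedily pick k statements, each covering the most still-unrepresented opinions."""
    slate, uncovered = [], set(range(len(opinions)))
    for _ in range(k):
        best = max(statements,
                   key=lambda s: sum(coverage_oracle(s, opinions[i]) for i in uncovered))
        slate.append(best)
        uncovered -= {i for i in uncovered if coverage_oracle(best, opinions[i])}
        statements = [s for s in statements if s != best]
    return slate

opinions = ["I want more privacy", "chatbots should remember me", "privacy matters most"]
print(select_slate(["privacy first", "memory helps chatbots"], opinions, k=2))
# → ['privacy first', 'memory helps chatbots']
```

Because the process is specified only in terms of oracle queries, its guarantees can be proved once and then inherited by any model that implements the queries well enough, which is the framework's central design choice.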
AI and Democracy's Digital Identity Crisis
Jain, Shrey, Spelliscy, Connor, Vance-Law, Samuel, Moore, Scott
AI-enabled tools have become sophisticated enough to allow a small number of individuals to run disinformation campaigns of an unprecedented scale. Privacy-preserving identity attestations can drastically reduce instances of impersonation and make disinformation easy to identify and potentially hinder. By understanding how identity attestations are positioned across the spectrum of decentralization, we can gain a better understanding of the costs and benefits of various attestations. In this paper, we discuss attestation types, including governmental, biometric, federated, and web of trust-based, and include examples such as e-Estonia, China's social credit system, Worldcoin, OAuth, X (formerly Twitter), Gitcoin Passport, and EAS. We believe that the most resilient systems create an identity that evolves and is connected to a network of similarly evolving identities that verify one another. In this type of system, each entity contributes its respective credibility to the attestation process, creating a larger, more comprehensive set of attestations. We believe these systems could be the best approach to authenticating identity and protecting against some of the threats to democracy that AI can pose in the hands of malicious actors. However, governments will likely attempt to mitigate these risks by implementing centralized identity authentication systems; these centralized systems could themselves pose risks to the democratic processes they are built to defend. We therefore recommend that policymakers support the development of standards-setting organizations for identity, provide legal clarity for builders of decentralized tooling, and fund research critical to effective identity authentication systems.
AI-enhanced images a 'threat to democratic processes', experts warn
Experts have warned that action needs to be taken on the use of artificial intelligence-generated or enhanced images in politics after a Labour MP apologised for sharing a manipulated image of Rishi Sunak pouring a pint. Karl Turner, the MP for Hull East, shared an image on the rebranded Twitter platform, X, showing the prime minister pulling a sub-standard pint at the Great British beer festival while a woman looks on with a derisive expression. The image had been manipulated from an original photo in which Sunak appears to have pulled a pub-level pint while the person behind him has a neutral expression. The image brought criticism from the Conservatives, with the deputy prime minister, Oliver Dowden, calling it "unacceptable". "I think that the Labour leader should disown this and Labour MPs who have retweeted this or shared this should delete the image, it is clearly misleading," Dowden told LBC on Thursday.
Op-ed: The EU's Artificial Intelligence Act does little to protect democracy
Let me introduce you to Marie. Marie is a 28-year-old professional and while on her way home from work is talking to a TikTok follower about the French elections. This follower has an uncanny ability to touch on subjects that mean the most to her. Almost overnight, Marie's social media feeds become increasingly filled with political themes, until on election day, her vote has already been heavily influenced. The trouble is the TikTok follower is not a person, but an artificial intelligence-driven bot, exploiting personal but publicly available data about Marie to manipulate her opinion.
Google's threat to withdraw its search engine from Australia is chilling to anyone who cares about democracy
Lewis, Peter
Google's testimony to an Australian Senate committee on Friday threatening to withdraw its search services from Australia is chilling to anyone who cares about democracy. It marks the latest escalation in the globally significant effort to regulate the way the big tech platforms use news content to drive their advertising businesses and the catastrophic impact on the news media across the world. The news bargaining code, which would require Google and Facebook to negotiate a fair price for the use of news content, is the product of an 18-month process driven by the competition regulator. That legislation is currently before the Australian parliament, where a Senate committee is taking final submissions from interested parties. The Google bombshell makes explicit what has been a slowly escalating threat that a binding code would not be tenable.
Don't Make Artificial Intelligence Artificially Stupid in the Name of Transparency
Artificial intelligence systems are going to crash some of our cars, and sometimes they're going to recommend longer sentences for black Americans than for whites. We know this because they've already gone wrong in these ways. But this doesn't mean that we should insist, as many (including the European Union's General Data Protection Regulation) do, that artificial intelligence should be able to explain how it came up with its conclusions in every non-trivial case. David Weinberger (@dweinberger) is a senior researcher at the Harvard Berkman Klein Center for Internet & Society. Demanding explicability sounds fine, but achieving it may require making artificial intelligence artificially stupid. And given the promise of the type of AI called machine learning, a dumbing-down of this technology could mean failing to diagnose diseases, overlooking significant causes of climate change, or making our educational system excessively one-size-fits-all.