
AI Risk


Why They Disagree: Decoding Differences in Opinions about AI Risk on the Lex Fridman Podcast

Truong, Nghi, Puranam, Phanish, Koçak, Özgecan

arXiv.org Artificial Intelligence

The emergence of transformative technologies often surfaces deep societal divisions, nowhere more evident than in contemporary debates about artificial intelligence (AI). A striking feature of these divisions is that they persist despite shared interests in ensuring that AI benefits humanity and avoiding catastrophic outcomes. This paper analyzes contemporary debates about AI risk, parsing the differences between the "doomer" and "boomer" perspectives into definitional, factual, causal, and moral premises to identify key points of contention. We find that differences in perspectives about existential risk ("X-risk") arise fundamentally from differences in causal premises about design vs. emergence in complex systems, while differences in perspectives about employment risks ("E-risks") pertain to different causal premises about the applicability of past theories (evolution) vs. their inapplicability (revolution). Disagreements about these two forms of AI risk appear to share two properties: neither involves significant disagreements on moral values, and both can be described in terms of differing views on the extent of boundedness of human rationality. Our approach to analyzing reasoning chains at scale, using an ensemble of LLMs to parse textual data, can be applied to identify key points of contention in debates about risk to the public in any arena.
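The abstract's "ensemble of LLMs" approach implies aggregating several models' labels for each parsed premise. A minimal sketch of such aggregation, assuming a simple majority vote with an alphabetical tie-break (the paper does not specify its tie-breaking rule, and the function name is hypothetical):

```python
from collections import Counter

# The four premise types named in the abstract.
PREMISE_TYPES = {"definitional", "factual", "causal", "moral"}

def ensemble_label(votes):
    """Majority vote across per-model labels; ignores labels outside the
    known premise types; ties resolved alphabetically for determinism."""
    counts = Counter(v for v in votes if v in PREMISE_TYPES)
    if not counts:
        return None
    return max(sorted(counts), key=counts.get)

print(ensemble_label(["causal", "causal", "moral"]))  # -> causal
```

Voting over independent model outputs is a common way to reduce the labeling noise of any single LLM, though the authors' actual aggregation scheme may differ.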


The 2025 OpenAI Preparedness Framework does not guarantee any AI risk mitigation practices: a proof-of-concept for affordance analyses of AI safety policies

Coggins, Sam, Saeri, Alexander K., Daniell, Katherine A., Ruster, Lorenn P., Liu, Jessie, Davis, Jenny L.

arXiv.org Artificial Intelligence

Prominent AI companies are producing 'safety frameworks' as a type of voluntary self-governance. These statements purport to establish risk thresholds and safety procedures for the development and deployment of highly capable AI. Understanding which AI risks are covered and what actions are allowed, refused, demanded, encouraged, or discouraged by these statements is vital for assessing how these frameworks actually govern AI development and deployment. We draw on affordance theory to analyse the OpenAI 'Preparedness Framework Version 2' (April 2025) using the Mechanisms & Conditions model of affordances and the MIT AI Risk Repository. We find that this safety policy requests evaluation of a small minority of AI risks, encourages deployment of systems with 'Medium' capabilities for unintentionally enabling 'severe harm' (which OpenAI defines as >1000 deaths or >$100B in damages), and allows OpenAI's CEO to deploy even more dangerous capabilities. These findings suggest that effective mitigation of AI risks requires more robust governance interventions beyond current industry self-regulation. Our affordance analysis provides a replicable method for evaluating what safety frameworks actually permit versus what they claim.


Are Companies Taking AI Risks Seriously? A Systematic Analysis of Companies' AI Risk Disclosures in SEC 10-K forms

Marin, Lucas G. Uberti-Bona, Rijsbosch, Bram, Spanakis, Gerasimos, Kollnig, Konrad

arXiv.org Artificial Intelligence

As Artificial Intelligence becomes increasingly central to corporate strategies, concerns over its risks are growing too. In response, regulators are pushing for greater transparency in how companies identify, report and mitigate AI-related risks. In the US, the Securities and Exchange Commission (SEC) repeatedly warned companies to provide their investors with more accurate disclosures of AI-related risks; recent enforcement and litigation against companies' misleading AI claims reinforce these warnings. In the EU, new laws - like the AI Act and Digital Services Act - introduced additional rules on AI risk reporting and mitigation. Given these developments, it is essential to examine if and how companies report AI-related risks to the public. This study presents the first large-scale systematic analysis of AI risk disclosures in SEC 10-K filings, which require public companies to report material risks to their company. We analyse over 30,000 filings from more than 7,000 companies over the past five years, combining quantitative and qualitative analysis. Our findings reveal a sharp increase in the companies that mention AI risk, up from 4% in 2020 to over 43% in the most recent 2024 filings. While legal and competitive AI risks are the most frequently mentioned, we also find growing attention to societal AI risks, such as cyberattacks, fraud, and technical limitations of AI systems. However, many disclosures remain generic or lack details on mitigation strategies, echoing concerns raised recently by the SEC about the quality of AI-related risk reporting. To support future research, we publicly release a web-based tool for easily extracting and analysing keyword-based disclosures across SEC filings.
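The study's released tool performs keyword-based extraction of AI-risk disclosures from SEC filings. A minimal sketch of the core idea, assuming a small illustrative keyword list (the authors' actual lexicon and tool internals are not described in the abstract):

```python
import re

# Hypothetical keyword patterns; the paper's real lexicon is likely larger.
AI_KEYWORDS = [r"\bartificial intelligence\b", r"\bmachine learning\b", r"\bAI\b"]

def ai_risk_sentences(filing_text):
    """Return sentences from a 10-K risk-factor section that mention an AI keyword."""
    sentences = re.split(r"(?<=[.!?])\s+", filing_text)
    pattern = re.compile("|".join(AI_KEYWORDS))
    return [s for s in sentences if pattern.search(s)]

sample = ("We face intense competition. Our use of artificial intelligence "
          "may expose us to legal and reputational risks. Currency risk persists.")
print(ai_risk_sentences(sample))
```

Matching on word-boundary patterns avoids false hits on substrings (e.g. "AI" inside another word), which matters at the scale of 30,000 filings.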


The AI Model Risk Catalog: What Developers and Researchers Miss About Real-World AI Harms

Rao, Pooja S. B., Šćepanović, Sanja, Jayagopi, Dinesh Babu, Cherubini, Mauro, Quercia, Daniele

arXiv.org Artificial Intelligence

We analyzed nearly 460,000 AI model cards from Hugging Face to examine how developers report risks. From these, we extracted around 3,000 unique risk mentions and built the AI Model Risk Catalog. We compared these with risks identified by researchers in the MIT Risk Repository and with real-world incidents from the AI Incident Database. Developers focused on technical issues like bias and safety, while researchers emphasized broader social impacts. Both groups paid little attention to fraud and manipulation, which are common harms arising from how people interact with AI. Our findings show the need for clearer, structured risk reporting that helps developers think about human-interaction and systemic risks early in the design process. The catalog and paper appendix are available at: https://social-dynamics.net/ai-risks/catalog.


Dimensional Characterization and Pathway Modeling for Catastrophic AI Risks

Chin, Ze Shen

arXiv.org Artificial Intelligence

Although discourse around the risks of Artificial Intelligence (AI) has grown, it often lacks a comprehensive, multidimensional framework, and concrete causal pathways mapping hazard to harm. This paper aims to bridge this gap by examining six commonly discussed AI catastrophic risks: CBRN, cyber offense, sudden loss of control, gradual loss of control, environmental risk, and geopolitical risk. First, we characterize these risks across seven key dimensions, namely intent, competency, entity, polarity, linearity, reach, and order. Next, we conduct risk pathway modeling by mapping step-by-step progressions from the initial hazard to the resulting harms. The dimensional approach supports systematic risk identification and generalizable mitigation strategies, while risk pathway models help identify scenario-specific interventions. Together, these methods offer a more structured and actionable foundation for managing catastrophic AI risks across the value chain.
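The seven dimensions named above lend themselves to a simple structured record per risk. A minimal sketch of such an encoding, assuming illustrative value sets (the field values shown are assumptions, not the author's exact coding scheme):

```python
from dataclasses import dataclass

@dataclass
class RiskProfile:
    """One of the paper's catastrophic risks, characterized along its seven dimensions."""
    risk: str
    intent: str       # intentional vs. unintentional hazard
    competency: str   # capability level of the causal entity
    entity: str       # who or what drives the hazard (human, AI system, both)
    polarity: str     # harm by action vs. omission
    linearity: str    # direct vs. indirect hazard-to-harm pathway
    reach: str        # e.g. local vs. global
    order: str        # first-order vs. higher-order effects

cyber = RiskProfile("cyber offense", "intentional", "high", "human+AI",
                    "action", "direct", "global", "first-order")
print(cyber.risk, cyber.reach)
```

A uniform record like this is what makes the abstract's "systematic risk identification" tractable: profiles with shared dimension values can share mitigation strategies.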


Artificial Intelligence in Deliberation: The AI Penalty and the Emergence of a New Deliberative Divide

Jungherr, Andreas, Rauchfleisch, Adrian

arXiv.org Artificial Intelligence

Digital deliberation has expanded democratic participation, yet challenges remain. This includes processing information at scale, moderating discussions, fact-checking, or attracting people to participate. Recent advances in artificial intelligence (AI) offer potential solutions, but public perceptions of AI's role in deliberation remain underexplored. Beyond efficiency, democratic deliberation is about voice and recognition. If AI is integrated into deliberation, public trust, acceptance, and willingness to participate may be affected. We conducted a preregistered survey experiment with a representative sample in Germany (n=1850) to examine how information about AI-enabled deliberation influences willingness to participate and perceptions of deliberative quality. Respondents were randomly assigned to treatments that provided them information about deliberative tasks facilitated by either AI or humans. Our findings reveal a significant AI-penalty. Participants were less willing to engage in AI-facilitated deliberation and rated its quality lower than human-led formats. These effects were moderated by individual predispositions. Perceptions of AI's societal benefits and anthropomorphization of AI showed positive interaction effects on people's interest to participate in AI-enabled deliberative formats and positive quality assessments, while AI risk assessments showed negative interactions with information about AI-enabled deliberation. These results suggest AI-enabled deliberation faces substantial public skepticism, potentially even introducing a new deliberative divide. Unlike traditional participation gaps based on education or demographics, this divide is shaped by attitudes toward AI. As democratic engagement increasingly moves online, ensuring AI's role in deliberation does not discourage participation or deepen inequalities will be a key challenge for future research and policy.


Safety Takes A Backseat At Paris AI Summit, As U.S. Pushes for Less Regulation

TIME - Tech

Safety concerns are out, optimism is in: that was the takeaway from a major artificial intelligence summit in Paris this week, as leaders from the U.S., France, and beyond threw their weight behind the AI industry. Although there were divisions between major nations--the U.S. and the U.K. did not sign a final statement endorsed by 60 nations calling for an "inclusive" and "open" AI sector--the focus of the two-day meeting was markedly different from the last such gathering. Last year, in Seoul, the emphasis was on defining red-lines for the AI industry. The concern: that the technology, although holding great promise, also had the potential for great harm. The final statement made no mention of significant AI risks nor attempts to mitigate them, while in a speech on Tuesday, U.S. Vice President J.D. Vance said: "I'm not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I'm here to talk about AI opportunity."


AI Governance through Markets

Tomei, Philip Moreira, Jain, Rupal, Franklin, Matija

arXiv.org Artificial Intelligence

This paper argues that market governance mechanisms should be considered a key approach in the governance of artificial intelligence (AI), alongside traditional regulatory frameworks. While current governance approaches have predominantly focused on regulation, we contend that market-based mechanisms offer effective incentives for responsible AI development. We examine four emerging vectors of market governance: insurance, auditing, procurement, and due diligence, demonstrating how these mechanisms can affirm the relationship between AI risk and financial risk while addressing capital allocation inefficiencies. While we do not claim that market forces alone can adequately protect societal interests, we maintain that standardised AI disclosures and market mechanisms can create powerful incentives for safe and responsible AI development. This paper urges regulators, economists, and machine learning researchers to investigate and implement market-based approaches to AI governance.


AI Doomers Had Their Big Moment

The Atlantic - Technology

Helen Toner remembers when every person who worked in AI safety could fit onto a school bus. Toner hadn't yet joined OpenAI's board and hadn't yet played a crucial role in the (short-lived) firing of its CEO, Sam Altman. She was working at Open Philanthropy, a nonprofit associated with the effective-altruism movement, when she first connected with the small community of intellectuals who care about AI risk. "It was, like, 50 people," she told me recently by phone. They were more of a sci-fi-adjacent subculture than a proper discipline. The deep-learning revolution was drawing new converts to the cause.


Implications for Governance in Public Perceptions of Societal-scale AI Risks

Gruetzemacher, Ross, Pilditch, Toby D., Liang, Huigang, Manning, Christy, Gates, Vael, Moss, David, Elsey, James W. B., Sleegers, Willem W. A., Kilian, Kyle

arXiv.org Artificial Intelligence

Amid growing concerns over AI's societal risks--ranging from civilizational collapse to misinformation and systemic bias--this study explores the perceptions of AI experts and the general US registered voters on the likelihood and impact of 18 specific AI risks, alongside their policy preferences for managing these risks. While both groups favor international oversight over national or corporate governance, our survey reveals a discrepancy: voters perceive AI risks as both more likely and more impactful than experts, and also advocate for slower AI development. Specifically, our findings indicate that policy interventions may best assuage collective concerns if they attempt to more carefully balance mitigation efforts across all classes of societal-scale risks, effectively nullifying the near-vs-long-term debate over AI risks. More broadly, our results will serve not only to enable more substantive policy discussions for preventing and mitigating AI risks, but also to underscore the challenge of consensus building for effective policy implementation.