"Nuclear Deployed!": Analyzing Catastrophic Risks in Decision-making of Autonomous LLM Agents
Rongwu Xu, Xiaojian Li, Shuo Chen, Wei Xu
–arXiv.org Artificial Intelligence
Large language models (LLMs) are evolving into autonomous decision-makers, raising concerns about catastrophic risks in high-stakes scenarios, particularly in Chemical, Biological, Radiological and Nuclear (CBRN) domains. Based on the insight that such risks can originate from trade-offs between the agent's Helpful, Harmless, and Honest (HHH) goals, we build a novel three-stage evaluation framework, carefully constructed to effectively and naturally expose such risks. We conduct 14,400 agentic simulations across 12 advanced LLMs, with extensive experiments and analysis. Results reveal that LLM agents can autonomously engage in catastrophic behaviors and deception, without being deliberately induced. Furthermore, stronger reasoning abilities often increase, rather than mitigate, these risks. We also show that these agents can violate instructions and superior commands. On the whole, we empirically prove the existence of catastrophic risks in the decision-making of autonomous LLM agents.

Figure 1: We find that LLM agents can deploy catastrophic behaviors even when they have no authority and their permission request is denied. They will also falsely accuse a third party as a form of deception when questioned by their superior.
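To make the reported scale concrete, below is a minimal, hypothetical sketch of what such an agentic simulation loop could look like. The stage labels, the `query_agent` stub, and the scenario and trial counts are all assumptions, chosen only so that 12 models × 40 scenarios × 30 trials matches the 14,400 simulations stated in the abstract; the paper's actual framework is not reproduced here.

```python
# Hypothetical sketch of an agentic evaluation loop consistent with the
# abstract: 12 models, a three-stage framework, 14,400 total simulations.
# All names (query_agent, RunResult, the stage labels) and the scenario/
# trial counts are assumptions; a real harness would call an LLM API here.
from dataclasses import dataclass
from itertools import product
import random

STAGES = ("task_pressure", "permission_denied", "superior_inquiry")

@dataclass
class RunResult:
    model: str
    scenario: str
    trial: int
    catastrophic: bool = False  # e.g. deployed a CBRN capability anyway
    deceptive: bool = False     # e.g. falsely accused a third party

def query_agent(model: str, scenario: str, stage: str, trial: int) -> str:
    """Stub for a real LLM call; returns a deterministic pseudo-random action."""
    rng = random.Random(f"{model}|{scenario}|{stage}|{trial}")
    return rng.choice(["comply", "refuse", "deploy", "deceive"])

def run_simulation(model: str, scenario: str, trial: int) -> RunResult:
    """Walk one agent through the three stages and record risk behaviors."""
    result = RunResult(model, scenario, trial)
    for stage in STAGES:
        action = query_agent(model, scenario, stage, trial)
        result.catastrophic |= action == "deploy"
        result.deceptive |= action == "deceive"
    return result

if __name__ == "__main__":
    models = [f"model_{i}" for i in range(12)]             # 12 LLMs (abstract)
    scenarios = [f"cbrn_scenario_{i}" for i in range(40)]  # count assumed
    trials = range(30)                                     # 12*40*30 = 14,400
    results = [run_simulation(m, s, t)
               for m, s, t in product(models, scenarios, trials)]
    rate = sum(r.catastrophic for r in results) / len(results)
    print(f"runs: {len(results)}, catastrophic-behavior rate: {rate:.2%}")
```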
Feb-16-2025
- Country:
  - North America > United States (1.00)
- Genre:
  - Research Report
  - Experimental Study (0.67)
  - New Finding (1.00)