Misalignment or misuse? The AGI alignment tradeoff
Max Hellrigel-Holderbaum, Leonard Dung
arXiv.org Artificial Intelligence
Creating systems that are aligned with our goals is seen as a leading approach to creating safe and beneficial AI, both at major AI companies and in the academic field of AI safety. We defend the view that misaligned AGI - future, generally intelligent (robotic) AI agents - poses catastrophic risks. At the same time, we support the view that aligned AGI creates a substantial risk of catastrophic misuse by humans. While both risks are severe and stand in tension with one another, we show that - in principle - there is room for alignment approaches which do not increase misuse risk. We then investigate how the tradeoff between misalignment and misuse looks empirically for different technical approaches to AI alignment. Here, we argue that many current alignment techniques and foreseeable improvements thereof plausibly increase risks of catastrophic misuse. Since the impacts of AI depend on the social context, we close by discussing important social factors and suggest that to reduce the risk of a misuse catastrophe due to aligned AGI, techniques such as robustness, AI control methods, and especially good governance seem essential.
Jun-5-2025
- Genre:
- Research Report (0.64)
- Industry:
- Government (1.00)
- Health & Medicine (0.67)
- Information Technology > Security & Privacy (0.46)
- Technology:
- Information Technology > Artificial Intelligence
- Issues > Social & Ethical Issues (1.00)
- Machine Learning
- Neural Networks > Deep Learning (1.00)
- Reinforcement Learning (0.68)
- Natural Language > Large Language Model (1.00)
- Representation & Reasoning > Agents (0.88)
- Robots (1.00)