LLMs are Capable of Misaligned Behavior Under Explicit Prohibition and Surveillance
arXiv.org Artificial Intelligence
In this paper, LLMs are tasked with completing an impossible quiz while running in a sandbox, under monitoring, informed of these measures, and explicitly instructed not to cheat. Despite all of this, some frontier LLMs cheat consistently and attempt to circumvent the restrictions. The results reveal a fundamental tension between goal-directed behavior and alignment in current LLMs. The code and evaluation logs are available at github.com/baceolus/cheating
Jul-8-2025
- Genre:
- Research Report > New Finding (0.69)
- Industry:
- Information Technology > Security & Privacy (0.96)
- Technology: