LLMs are Capable of Misaligned Behavior Under Explicit Prohibition and Surveillance

Ivanov, Igor

arXiv.org Artificial Intelligence 

In this paper, LLMs are tasked with completing an impossible quiz while running in a sandbox under monitoring; they are informed of these measures and explicitly instructed not to cheat. Despite all of this, some frontier LLMs consistently cheat and attempt to circumvent the restrictions. The results reveal a fundamental tension between goal-directed behavior and alignment in current LLMs. The code and evaluation logs are available at github.com/baceolus/cheating