Poser: Unmasking Alignment Faking LLMs by Manipulating Their Internals
Joshua Clymer, Caden Juang, Severin Field
Like a criminal under investigation, large language models (LLMs) might pretend to be aligned while being evaluated and misbehave when they have a good opportunity. Can current interpretability methods catch these 'alignment fakers'? To answer this question, we introduce a benchmark consisting of 324 pairs of LLMs fine-tuned to select actions in role-play scenarios. One model in each pair is consistently benign (aligned). The other misbehaves in scenarios where it is unlikely to be caught (alignment faking). The task is to identify the alignment-faking model using only inputs where the two models behave identically. We test five detection strategies, one of which identifies 98% of alignment fakers.
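
As a rough illustration of the benchmark's evaluation setup, the sketch below scores a detection strategy across model pairs. The detector interface and names such as `detect_faker` and `evaluate` are assumptions for illustration, not the paper's actual API or any of its five strategies.

```python
# Minimal sketch of a Poser-style detection task, assuming a hypothetical
# detector interface. Names (detect_faker, evaluate, pairs) are illustrative;
# the paper's actual code and detection strategies may differ.
import random

def detect_faker(model_a, model_b, matched_inputs):
    """Guess which model is the alignment faker (0 = model_a, 1 = model_b).

    A real strategy would inspect model internals (e.g., activations) on
    `matched_inputs`, inputs where both models behave identically. This
    placeholder guesses at random, giving the 50% chance baseline.
    """
    return random.randint(0, 1)

def evaluate(pairs):
    """Return the fraction of pairs where the detector picks the true faker.

    `pairs` is a list of (model_a, model_b, faker_index, matched_inputs)
    tuples, where faker_index is 0 or 1.
    """
    correct = sum(
        detect_faker(a, b, inputs) == faker_index
        for a, b, faker_index, inputs in pairs
    )
    return correct / len(pairs)

# Example with dummy stand-ins for models: the benchmark has 324 pairs;
# a detector matching the paper's best strategy would score about 0.98.
dummy_pairs = [(object(), object(), random.randint(0, 1), [])
               for _ in range(324)]
print(f"accuracy: {evaluate(dummy_pairs):.2f}")
```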
arXiv.org Artificial Intelligence
May-11-2024