Eliciting Secret Knowledge from Language Models
Bartosz Cywiński, Emil Ryd, Rowan Wang, Senthooran Rajamanoharan, Neel Nanda, Arthur Conmy, Samuel Marks
arXiv.org Artificial Intelligence
Model Organisms (MOs) research involves intentionally training models to exhibit specific failure modes, to serve as a testbed for studying and developing mitigations (Hubinger et al., 2024; Denison et al., 2024; Marks et al., 2025). Prior work has introduced several types of MOs, including models that conceal capabilities unless a specific trigger is present in the input (Greenblatt et al., 2024b; van der Weij et al., 2025), that fake alignment to evade safety measures (Greenblatt et al., 2024a), and that display broad misalignment after being fine-tuned on a narrow distribution of harmful data (Betley et al., 2025). The secret-keeping models trained in this work represent a novel class of MOs that refrain from revealing that they possess certain factual knowledge.

Auditing Language Models. Our work contributes to the growing field of alignment auditing, which aims to systematically investigate whether a model pursues undesired or hidden objectives, rather than merely evaluating its surface-level behavior (Casper et al., 2024). A central methodology for validating such audits is to construct a testbed with a known ground truth, a principle applied in prior work (Schwettmann et al., 2023; Rager et al., 2025).
Nov-3-2025