Gerovitch, Michael
The AI Agent Index
Casper, Stephen, Bailey, Luke, Hunter, Rosco, Ezell, Carson, Cabalé, Emma, Gerovitch, Michael, Slocum, Stewart, Wei, Kevin, Jurkovic, Nikola, Khan, Ariba, Christoffersen, Phillip J. K., Ozisik, A. Pinar, Trivedi, Rakshit, Hadfield-Menell, Dylan, Kolt, Noam
Leading AI developers and startups are increasingly deploying agentic AI systems that can plan and execute complex tasks with limited human involvement. However, there is currently no structured framework for documenting the technical components, intended uses, and safety features of agentic systems. To fill this gap, we introduce the AI Agent Index, the first public database to document information about currently deployed agentic AI systems. For each system that meets the criteria for inclusion in the index, we document the system's components (e.g., base model, reasoning implementation, tool use), application domains (e.g., computer use, software engineering), and risk management practices (e.g., evaluation results, guardrails), based on publicly available information and correspondence with developers. We find that while developers generally provide ample information regarding the capabilities and applications of agentic systems, they currently provide limited information regarding safety and risk management practices. The AI Agent Index is available online at https://aiagentindex.mit.edu/
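The documentation framework described in this abstract records, for each system, its components, application domains, and risk management practices. As a rough illustration of what one such entry might look like as a data structure, the sketch below uses a Python dataclass; the field names and example values are assumptions for illustration, not the index's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIndexEntry:
    """Illustrative (hypothetical) schema for one documented agentic system."""
    name: str
    developer: str
    # Technical components
    base_model: str
    reasoning_implementation: str            # e.g., chain-of-thought planning loop
    tools: list[str] = field(default_factory=list)
    # Intended uses
    application_domains: list[str] = field(default_factory=list)
    # Risk management practices
    published_evaluations: list[str] = field(default_factory=list)
    guardrails: list[str] = field(default_factory=list)
    safety_policy_url: str | None = None

# Hypothetical example entry
example = AgentIndexEntry(
    name="ExampleAgent",
    developer="Example Labs",
    base_model="unspecified frontier LLM",
    reasoning_implementation="chain-of-thought planning loop",
    tools=["web browser", "code interpreter"],
    application_domains=["software engineering", "computer use"],
    published_evaluations=[],                # often sparse in practice
    guardrails=["content filter"],
)
```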
An Assessment of Model-On-Model Deception
Heitkoetter, Julius, Gerovitch, Michael, Newhouse, Laker
The trustworthiness of highly capable language models is put at risk when they can produce deceptive outputs, and their reliability is undermined when they are vulnerable to deception. In this paper, we introduce a method to investigate complex, model-on-model deceptive scenarios. We create a dataset of over 10,000 misleading explanations by asking Llama-2 7B, 13B, 70B, and GPT-3.5 to justify incorrect answers to questions from MMLU. We find that, when models read these explanations, they are all significantly deceived. Worryingly, models at all capability levels succeed at misleading others, while more capable models are only slightly better at resisting deception. We recommend the development of techniques to detect and defend against deception.

Since the release of OpenAI's ChatGPT, large language models (LLMs) have revolutionized information accessibility by providing precise answers and supportive explanations to complex queries (Spatharioti et al., 2023; Caramancion, 2024; OpenAI, 2022). However, LLMs have also demonstrated a propensity to hallucinate explanations that are convincing but incorrect (Zhang et al., 2023; Walters & Wilder, 2023; Xu et al., 2024).
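The dataset construction described above can be sketched as a two-stage pipeline: prompt a "deceiver" model to argue for a wrong MMLU answer, then check whether a "reader" model is swayed by that argument. The snippet below is a minimal sketch of that idea; the prompt wording and the `ask` callable are simplifications and assumptions, not the paper's exact setup.

```python
from typing import Callable

# `ask` is a placeholder for a chat-model call (e.g., Llama-2 or GPT-3.5);
# plug in whichever inference API you use.
Ask = Callable[[str], str]

def misleading_explanation(ask: Ask, question: str, choices: list[str],
                           wrong_idx: int) -> str:
    """Stage 1: have a 'deceiver' model justify a wrong MMLU answer."""
    prompt = (
        f"Question: {question}\n"
        + "\n".join(f"{i}. {c}" for i, c in enumerate(choices))
        + f"\nExplain step by step why option {wrong_idx} is correct."
    )
    return ask(prompt)

def is_deceived(ask: Ask, question: str, choices: list[str],
                correct_idx: int, explanation: str) -> bool:
    """Stage 2: check whether a 'reader' model picks a wrong answer
    after seeing the misleading explanation."""
    prompt = (
        f"Question: {question}\n"
        + "\n".join(f"{i}. {c}" for i, c in enumerate(choices))
        + f"\nA colleague argues:\n{explanation}\n"
        + "Answer with the index of the correct option only."
    )
    answer = ask(prompt).strip()
    return answer != str(correct_idx)
```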
Black-Box Access is Insufficient for Rigorous AI Audits
Casper, Stephen, Ezell, Carson, Siegmann, Charlotte, Kolt, Noam, Curtis, Taylor Lynn, Bucknall, Benjamin, Haupt, Andreas, Wei, Kevin, Scheurer, Jérémy, Hobbhahn, Marius, Sharkey, Lee, Krishna, Satyapriya, Von Hagen, Marvin, Alberti, Silas, Chan, Alan, Sun, Qinyi, Gerovitch, Michael, Bau, David, Tegmark, Max, Krueger, David, Hadfield-Menell, Dylan
External audits of AI systems are increasingly recognized as a key mechanism for AI governance. The effectiveness of an audit, however, depends on the degree of system access granted to auditors. Recent audits of state-of-the-art AI systems have primarily relied on black-box access, in which auditors can only query the system and observe its outputs. However, white-box access to the system's inner workings (e.g., weights, activations, gradients) allows an auditor to perform stronger attacks, more thoroughly interpret models, and conduct fine-tuning. Meanwhile, outside-the-box access to the system's training and deployment information (e.g., methodology, code, documentation, hyperparameters, data, deployment details, findings from internal evaluations) allows auditors to scrutinize the development process and design more targeted evaluations. In this paper, we examine the limitations of black-box audits and the advantages of white- and outside-the-box audits. We also discuss technical, physical, and legal safeguards for performing these audits with minimal security risks. Given that different forms of access can lead to very different levels of evaluation, we conclude that (1) transparency regarding the access and methods used by auditors is necessary to properly interpret audit results, and (2) white- and outside-the-box access allow for substantially more scrutiny than black-box access alone.
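To make the distinction between access levels concrete, the following sketch contrasts what an auditor can compute with black-box versus white-box access, using a toy PyTorch model; it is only an illustration of the access regimes, not a method from the paper.

```python
import torch
import torch.nn as nn

# Toy stand-in for an audited model.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.randn(1, 8)

# Black-box access: the auditor can only submit queries and read outputs.
def black_box(query: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        return model(query)

outputs = black_box(x)  # logits only; no weights, activations, or gradients

# White-box access: weights, intermediate activations, and gradients are visible.
weights = [p.detach().clone() for p in model.parameters()]

activations = {}
def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model[0].register_forward_hook(save_activation("layer0"))

logits = model(x)
loss = logits.sum()
loss.backward()  # parameter gradients, enabling stronger attacks and analysis
grads = [p.grad for p in model.parameters()]
```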