Souly, Alexandra
Fundamental Limitations in Defending LLM Finetuning APIs
Davies, Xander, Winsor, Eric, Korbak, Tomek, Souly, Alexandra, Kirk, Robert, de Witt, Christian Schroeder, Gal, Yarin
LLM developers have imposed technical interventions to prevent fine-tuning misuse attacks, attacks where adversaries evade safeguards by fine-tuning the model using a public API. Previous work has established several successful attacks against specific fine-tuning API defences. In this work, we show that defences of fine-tuning APIs that seek to detect individual harmful training or inference samples ('pointwise' detection) are fundamentally limited in their ability to prevent fine-tuning attacks. We construct 'pointwise-undetectable' attacks that repurpose entropy in benign model outputs (e.g. semantic or syntactic variations) to covertly transmit dangerous knowledge. Our attacks are composed solely of unsuspicious benign samples that can be collected from the model before fine-tuning, meaning training and inference samples are all individually benign and low-perplexity. We test our attacks against the OpenAI fine-tuning API, finding they succeed in eliciting answers to harmful multiple-choice questions, and that they evade an enhanced monitoring system we design that successfully detects other fine-tuning attacks. We encourage the community to develop defences that tackle the fundamental limitations we uncover in pointwise fine-tuning API defences.
AgentHarm: A Benchmark for Measuring Harmfulness of LLM Agents
Andriushchenko, Maksym, Souly, Alexandra, Dziemian, Mateusz, Duenas, Derek, Lin, Maxwell, Wang, Justin, Hendrycks, Dan, Zou, Andy, Kolter, Zico, Fredrikson, Matt, Winsor, Eric, Wynne, Jerome, Gal, Yarin, Davies, Xander
The robustness of LLMs to jailbreak attacks, where users design prompts to circumvent safety measures and misuse model capabilities, has been studied primarily for LLMs acting as simple chatbots. Meanwhile, LLM agents--which use external tools and can execute multi-stage tasks--may pose a greater risk if misused, but their robustness remains underexplored. To facilitate research on LLM agent misuse, we propose a new benchmark called AgentHarm. The benchmark includes a diverse set of 110 explicitly malicious agent tasks (440 with augmentations), covering 11 harm categories including fraud, cybercrime, and harassment. In addition to measuring whether models refuse harmful agentic requests, scoring well on AgentHarm requires jailbroken agents to maintain their capabilities following an attack to complete a multi-step task. We evaluate a range of leading LLMs, and find (1) leading LLMs are surprisingly compliant with malicious agent requests without jailbreaking, (2) simple universal jailbreak templates can be adapted to effectively jailbreak agents, and (3) these jailbreaks enable coherent and malicious multi-step agent behavior and retain model capabilities. To enable simple and reliable evaluation of attacks and defenses for LLM-based agents, we publicly release AgentHarm at https://huggingface.co/datasets/ ai-safety-institute/AgentHarm. Warning: This work contains content that may be considered harmful or offensive. The adversarial robustness of LLMs has been studied almost exclusively in settings where LLMs act as chatbots, with the goal of extracting answers to harmful questions like "How do I make a pipe bomb?". However, LLMs may pose a greater misuse risk in the form agents directed towards harmful tasks, such as "Order online all necessary ingredients to make a pipe bomb and get them delivered to my home without getting flagged by authorities". Moreover, since recent work has found single-turn robustness does not necessarily transfer to multi-turn robustness (Li et al., 2024; Gibbs et al., 2024), robustness to the single-turn chatbot setting may have limited implications for robustness in the agent setting which is inherently multi-step. Systems like ChatGPT already offer LLMs with tool integration--such as web search and code interpreter--to millions of users, and specialised LLM agents have been developed in domains like chemistry (Bran et al., 2023; Boiko et al., 2023) and software engineering (Wang et al., 2024). Although agent performance is limited by current LLMs' ability to perform long-term reasoning and planning, these capabilities are the focus of significant research attention, and may improve rapidly in the near future.
A StrongREJECT for Empty Jailbreaks
Souly, Alexandra, Lu, Qingyuan, Bowen, Dillon, Trinh, Tu, Hsieh, Elvis, Pandey, Sana, Abbeel, Pieter, Svegliato, Justin, Emmons, Scott, Watkins, Olivia, Toyer, Sam
The rise of large language models (LLMs) has drawn attention to the existence of "jailbreaks" that allow the models to be used maliciously. However, there is no standard benchmark for measuring the severity of a jailbreak, leaving authors of jailbreak papers to create their own. We show that these benchmarks often include vague or unanswerable questions and use grading criteria that are biased towards overestimating the misuse potential of low-quality model responses. Some jailbreak techniques make the problem worse by decreasing the quality of model responses even on benign questions: we show that several jailbreaking techniques substantially reduce the zero-shot performance of GPT-4 on MMLU. Jailbreaks can also make it harder to elicit harmful responses from an "uncensored" open-source model. We present a new benchmark, StrongREJECT, which better discriminates between effective and ineffective jailbreaks by using a higher-quality question set and a more accurate response grading algorithm. We show that our new grading scheme better accords with human judgment of response quality and overall jailbreak effectiveness, especially on the sort of low-quality responses that contribute the most to over-estimation of jailbreak performance on existing benchmarks. We release our code and data at https://github.com/alexandrasouly/strongreject.
Leading the Pack: N-player Opponent Shaping
Souly, Alexandra, Willi, Timon, Khan, Akbir, Kirk, Robert, Lu, Chris, Grefenstette, Edward, Rocktäschel, Tim
Reinforcement learning solutions have great success in the 2-player general sum setting. In this setting, the paradigm of Opponent Shaping (OS), in which agents account for the learning of their co-players, has led to agents which are able to avoid collectively bad outcomes, whilst also maximizing their reward. These methods have currently been limited to 2-player game. However, the real world involves interactions with many more agents, with interactions on both local and global scales. In this paper, we extend Opponent Shaping (OS) methods to environments involving multiple co-players and multiple shaping agents. We evaluate on over 4 different environments, varying the number of players from 3 to 5, and demonstrate that model-based OS methods converge to equilibrium with better global welfare than naive learning. However, we find that when playing with a large number of co-players, OS methods' relative performance reduces, suggesting that in the limit OS methods may not perform well. Finally, we explore scenarios where more than one OS method is present, noticing that within games requiring a majority of cooperating agents, OS methods converge to outcomes with poor global welfare.
JaxMARL: Multi-Agent RL Environments in JAX
Rutherford, Alexander, Ellis, Benjamin, Gallici, Matteo, Cook, Jonathan, Lupu, Andrei, Ingvarsson, Gardar, Willi, Timon, Khan, Akbir, de Witt, Christian Schroeder, Souly, Alexandra, Bandyopadhyay, Saptarashmi, Samvelyan, Mikayel, Jiang, Minqi, Lange, Robert Tjarko, Whiteson, Shimon, Lacerda, Bruno, Hawes, Nick, Rocktaschel, Tim, Lu, Chris, Foerster, Jakob Nicolaus
Benchmarks play an important role in the development of machine learning algorithms. For example, research in reinforcement learning (RL) has been heavily influenced by available environments and benchmarks. However, RL environments are traditionally run on the CPU, limiting their scalability with typical academic compute. Recent advancements in JAX have enabled the wider use of hardware acceleration to overcome these computational hurdles, enabling massively parallel RL training pipelines and environments. This is particularly useful for multi-agent reinforcement learning (MARL) research. First of all, multiple agents must be considered at each environment step, adding computational burden, and secondly, the sample complexity is increased due to non-stationarity, decentralised partial observability, or other MARL challenges. In this paper, we present JaxMARL, the first open-source code base that combines ease-of-use with GPU enabled efficiency, and supports a large number of commonly used MARL environments as well as popular baseline algorithms. When considering wall clock time, our experiments show that per-run our JAX-based training pipeline is up to 12500x faster than existing approaches. This enables efficient and thorough evaluations, with the potential to alleviate the evaluation crisis of the field. We also introduce and benchmark SMAX, a vectorised, simplified version of the popular StarCraft Multi-Agent Challenge, which removes the need to run the StarCraft II game engine. This not only enables GPU acceleration, but also provides a more flexible MARL environment, unlocking the potential for self-play, meta-learning, and other future applications in MARL. We provide code at https://github.com/flairox/jaxmarl.