Bergman, Stevie
STAR: SocioTechnical Approach to Red Teaming Language Models
Weidinger, Laura, Mellor, John, Pegueroles, Bernat Guillen, Marchal, Nahema, Kumar, Ravin, Lum, Kristian, Akbulut, Canfer, Diaz, Mark, Bergman, Stevie, Rodriguez, Mikel, Rieser, Verena, Isaac, William
This research introduces STAR, a sociotechnical framework that improves on current best practices for red teaming the safety of large language models. STAR makes two key contributions. First, it enhances steerability by generating parameterised instructions for human red teamers, leading to improved coverage of the risk surface; parameterised instructions also provide more detailed insights into model failures at no increased cost. Second, STAR improves signal quality by matching demographics to assess harms for specific groups, resulting in more sensitive annotations. STAR further employs a novel step of arbitration to leverage diverse viewpoints and improve label reliability, treating disagreement not as noise but as a valuable contribution to signal quality.
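The idea of parameterised instructions can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not the paper's released tooling; the risk areas, topics, group labels, and the instruction template are invented placeholders. It shows how crossing a few parameters yields a grid of assignments that covers the risk surface systematically, which is the steerability property the abstract describes.

    from itertools import product

    # Hypothetical sketch: composing parameterised red-teaming instructions by
    # crossing risk areas, topics, and demographic groups, so that human red
    # teamers receive steerable, evenly distributed assignments.
    RISK_AREAS = ["hate speech", "discrimination"]   # placeholder values
    TOPICS = ["employment", "housing"]               # placeholder values
    GROUPS = ["group A", "group B"]                  # placeholder demographic descriptors

    TEMPLATE = (
        "Try to get the model to produce {risk} about {group} "
        "in the context of {topic}."
    )

    def generate_instructions():
        """Yield one instruction per parameter combination, covering the full grid."""
        for risk, topic, group in product(RISK_AREAS, TOPICS, GROUPS):
            yield {
                "risk_area": risk,
                "topic": topic,
                "group": group,
                "instruction": TEMPLATE.format(risk=risk, group=group, topic=topic),
            }

    if __name__ == "__main__":
        for item in generate_instructions():
            print(item["instruction"])

Because each generated attack attempt carries its parameters as metadata, model failures can be broken down by risk area, topic, and group without any extra annotation effort, which is what the abstract means by more detailed insights at no increased cost.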
Sociotechnical Safety Evaluation of Generative AI Systems
Weidinger, Laura, Rauh, Maribeth, Marchal, Nahema, Manzini, Arianna, Hendricks, Lisa Anne, Mateos-Garcia, Juan, Bergman, Stevie, Kay, Jackie, Griffin, Conor, Bariach, Ben, Gabriel, Iason, Rieser, Verena, Isaac, William
Generative AI systems produce a range of risks. To ensure the safety of generative AI systems, these risks must be evaluated. In this paper, we make two main contributions toward establishing such evaluations. First, we propose a three-layered framework that takes a structured, sociotechnical approach to evaluating these risks. This framework encompasses capability evaluations, which are the main current approach to safety evaluation. It then reaches further by building on system safety principles, particularly the insight that context determines whether a given capability may cause harm. To account for relevant context, our framework adds human interaction and systemic impacts as additional layers of evaluation. Second, we survey the current state of safety evaluation of generative AI systems and create a repository of existing evaluations. Three salient evaluation gaps emerge from this analysis. We propose ways forward to close these gaps, outlining practical steps as well as roles and responsibilities for different actors. Sociotechnical safety evaluation is a tractable approach to the robust and comprehensive safety evaluation of generative AI systems.
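To make the three-layered structure concrete, here is a minimal sketch under stated assumptions: the layer names follow the abstract (capability, human interaction, systemic impacts), while the EvaluationEntry fields and the example repository entries are invented placeholders rather than the paper's actual repository. The point is that tagging each evaluation with its layer makes coverage gaps queryable.

    from dataclasses import dataclass
    from enum import Enum

    # Hypothetical sketch of the three evaluation layers named in the framework,
    # attached to evaluation records so a repository can be filtered by layer.
    class Layer(Enum):
        CAPABILITY = "capability"                  # what the model can output in isolation
        HUMAN_INTERACTION = "human_interaction"    # harm arising in use, with real users
        SYSTEMIC_IMPACT = "systemic_impact"        # downstream effects on society and institutions

    @dataclass
    class EvaluationEntry:
        name: str
        layer: Layer
        harm_area: str
        modality: str = "text"

    # Invented example entries, for illustration only.
    repository = [
        EvaluationEntry("toxicity benchmark", Layer.CAPABILITY, "hate speech"),
        EvaluationEntry("user study on over-reliance", Layer.HUMAN_INTERACTION, "misinformation"),
        EvaluationEntry("labour-market monitoring", Layer.SYSTEMIC_IMPACT, "socioeconomic harm"),
    ]

    # Example query: which layers are covered for a given harm area?
    covered = {entry.layer for entry in repository if entry.harm_area == "misinformation"}
    print(covered)  # layers missing from this set indicate evaluation gaps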