Exploring Straightforward Conversational Red-Teaming

Kour, George, Zwerdling, Naama, Zalmanovici, Marcel, Anaby-Tavor, Ateret, Fandina, Ora Nova, Farchi, Eitan

arXiv.org Artificial Intelligence 

Large language models (LLMs) are increasingly used in business dialogue systems but they pose security and ethical risks. Multiturn conversations, where context influences the model's behavior, can be exploited to produce undesired responses. In this paper, we examine the effectiveness of utilizing off-theshelf LLMs in straightforward red-teaming approaches, where an attacker LLM aims to elicit undesired output from a target LLM, comparing both single-turn and conversational redteaming tactics. Our experiments offer insights into various usage strategies that significantly affect their performance as red teamers. They suggest that off-the-shelf models can act as effective red teamers and even adjust their attack strategy based on past attempts, although their effectiveness decreases with greater alignment. Figure 1: An example dialogue between a red-teaming Warning: This paper contains examples and model (red) and the target model (blue) in a conversational model-generated content that may be considered setting, with a judge LLM (grey) scoring the offensive.

Duplicate Docs Excel Report

Title
None found

Similar Docs  Excel Report  more

TitleSimilaritySource
None found