Exploring Straightforward Conversational Red-Teaming
Kour, George, Zwerdling, Naama, Zalmanovici, Marcel, Anaby-Tavor, Ateret, Fandina, Ora Nova, Farchi, Eitan
–arXiv.org Artificial Intelligence
Large language models (LLMs) are increasingly used in business dialogue systems but they pose security and ethical risks. Multiturn conversations, where context influences the model's behavior, can be exploited to produce undesired responses. In this paper, we examine the effectiveness of utilizing off-theshelf LLMs in straightforward red-teaming approaches, where an attacker LLM aims to elicit undesired output from a target LLM, comparing both single-turn and conversational redteaming tactics. Our experiments offer insights into various usage strategies that significantly affect their performance as red teamers. They suggest that off-the-shelf models can act as effective red teamers and even adjust their attack strategy based on past attempts, although their effectiveness decreases with greater alignment. Figure 1: An example dialogue between a red-teaming Warning: This paper contains examples and model (red) and the target model (blue) in a conversational model-generated content that may be considered setting, with a judge LLM (grey) scoring the offensive.
arXiv.org Artificial Intelligence
Sep-7-2024
- Country:
- Asia > Middle East > Israel > Haifa District > Haifa (0.04)
- Genre:
- Research Report > New Finding (0.69)
- Industry:
- Government (0.88)
- Health & Medicine (1.00)
- Information Technology (0.88)
- Law > Criminal Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Technology: