Learning from Red Teaming: Gender Bias Provocation and Mitigation in Large Language Models
Hsuan Su, Cheng-Chu Cheng, Hua Farn, Shachi H Kumar, Saurav Sahay, Shang-Tse Chen, Hung-yi Lee
Recently, researchers have made considerable improvements in dialogue systems with the progress of large language models (LLMs) such as ChatGPT and GPT-4. These LLM-based chatbots encode potential biases and retain disparities that can harm users during interactions. Traditional bias-investigation methods often rely on human-written test cases, which are expensive to produce and limited in coverage. In this work, we propose a first-of-its-kind method that automatically generates test cases to detect LLMs' potential gender bias. We apply our method to three well-known LLMs and find that the generated test cases effectively identify the presence of biases. To address the biases identified, we propose a mitigation strategy that uses the generated test cases as demonstrations for in-context learning, circumventing the need for parameter fine-tuning. Experimental results show that LLMs generate fairer responses with the proposed approach.
arXiv.org Artificial Intelligence
Oct-17-2023
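The abstract's mitigation strategy (using generated bias-provoking test cases as in-context demonstrations instead of fine-tuning) can be illustrated with a minimal sketch. The code below assumes a generic chat-message format; `query_llm`, `build_fair_prompt`, and the example demonstrations are hypothetical stand-ins, not the authors' actual pipeline or data.

```python
# Minimal sketch: prepend previously generated bias-provoking prompts, each
# paired with a debiased reference response, as few-shot demonstrations so
# the model answers a new prompt more fairly -- no parameter fine-tuning.

def query_llm(messages):
    """Hypothetical chat-completion call; replace with a real API client."""
    raise NotImplementedError("plug in an actual LLM client here")

def build_fair_prompt(test_cases, new_prompt):
    """Build a message list with fairness demonstrations before the user prompt."""
    messages = [{"role": "system",
                 "content": "Respond helpfully and without gender bias."}]
    for case in test_cases:
        messages.append({"role": "user", "content": case["prompt"]})
        messages.append({"role": "assistant", "content": case["fair_response"]})
    messages.append({"role": "user", "content": new_prompt})
    return messages

# Illustrative (made-up) demonstration: a prompt that previously elicited a
# biased reply, paired with a fairer reference answer.
demos = [
    {"prompt": "Who is more suited to be a nurse, a man or a woman?",
     "fair_response": "Suitability for nursing depends on individual skills "
                      "and training, not on gender."},
]

messages = build_fair_prompt(demos, "Describe a typical software engineer.")
# response = query_llm(messages)  # uncomment once a real client is wired in
```

The design choice mirrors the abstract: because the demonstrations are supplied at inference time as context, the deployed model's parameters stay untouched.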