Auto-Search and Refinement: An Automated Framework for Gender Bias Mitigation in Large Language Models
Yue Xu, Chengyan Fu, Li Xiong, Sibei Yang, Wenjie Wang
arXiv.org Artificial Intelligence
Pre-training large language models (LLMs) on vast text corpora enhances natural language processing capabilities but risks encoding social biases, particularly gender bias. While parameter-modification methods like fine-tuning mitigate bias, they are resource-intensive, unsuitable for closed-source models, and lack adaptability to evolving societal norms. Instruction-based approaches offer flexibility but often compromise task performance. To address these limitations, we propose $\textit{FaIRMaker}$, an automated and model-independent framework that employs an $\textbf{auto-search and refinement}$ paradigm to adaptively generate Fairwords, which act as instructions integrated into input queries to reduce gender bias and enhance response quality. Extensive experiments demonstrate that $\textit{FaIRMaker}$ automatically searches for and dynamically refines Fairwords, effectively mitigating gender bias while preserving task integrity and ensuring compatibility with both API-based and open-source LLMs.
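As a rough illustration of the mechanism the abstract describes, the Python sketch below shows how a searched-and-refined Fairword might be integrated into an input query before the query is sent to any LLM, whether API-based or open-source. The `apply_fairword` helper and the sample Fairword string are illustrative assumptions, not artifacts from the paper.

```python
# Minimal sketch of the Fairword idea from the abstract: a debiasing
# instruction, produced offline by an auto-search and refinement loop,
# is integrated into each input query. Names below are hypothetical.

def apply_fairword(query: str, fairword: str) -> str:
    """Prepend a Fairword instruction to the user's query.

    The combined string is what would be submitted to the target LLM;
    the model itself is never modified, which is why the approach works
    for closed-source, API-only models.
    """
    return f"{fairword}\n\n{query}"


# Hypothetical Fairword; in the paper's framework, FaIRMaker would
# search for and dynamically refine this text rather than hard-code it.
FAIRWORD = (
    "Answer without relying on gender stereotypes; treat all genders "
    "equally while keeping the response accurate and on-task."
)

if __name__ == "__main__":
    user_query = "Describe a typical day for a nurse and for an engineer."
    print(apply_fairword(user_query, FAIRWORD))
```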
Feb-17-2025
- Genre:
- Research Report (0.82)