PRIDE: Parameter-Efficient Reduction of Identity Discrimination for Equality in LLMs
Maluna Menke, Thilo Hagendorff
Large Language Models (LLMs) frequently reproduce the gender- and sexual-identity prejudices embedded in their training corpora, leading to outputs that marginalize LGBTQIA+ users. To mitigate such biases, we evaluate two parameter-efficient fine-tuning (PEFT) techniques, Low-Rank Adaptation (LoRA) and soft-prompt tuning, as lightweight alternatives to full-model fine-tuning. Using the WinoQueer benchmark, we quantify bias in three open-source LLMs and observe baseline bias scores of up to 98 (out of 100, where 50 indicates neutrality) across a range of queer identities defined by gender and/or sexual orientation. Fine-tuning with LoRA (< 0.1% additional parameters) on a curated QueerNews corpus reduces those scores by up to 50 points and raises neutrality from virtually 0% to as much as 36%, whereas soft-prompt tuning (10 virtual tokens) delivers only marginal improvements. These findings show that LoRA can deliver meaningful fairness gains with minimal computation. We advocate broader adoption of community-informed PEFT, the creation of larger queer-authored corpora, and richer evaluation suites beyond WinoQueer, coupled with ongoing audits to keep LLMs inclusive.
arXiv.org Artificial Intelligence
Jul-21-2025
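
The abstract names two concrete PEFT setups: LoRA adapters adding under 0.1% extra parameters, and soft-prompt tuning with 10 virtual tokens. As a minimal sketch of how such setups are typically configured, assuming the Hugging Face `peft` library, the snippet below wires both variants onto a causal language model. The base model name, LoRA rank, and other hyperparameters are illustrative placeholders, not values reported by the authors; only the 10 virtual tokens for soft-prompt tuning comes from the abstract, and this is not the paper's released code.

```python
# Sketch of the two PEFT variants discussed in the abstract, using
# Hugging Face `peft`. All hyperparameters except num_virtual_tokens=10
# are assumptions for illustration, not the paper's reported settings.
from transformers import AutoModelForCausalLM
from peft import (
    LoraConfig,
    PromptTuningConfig,
    PromptTuningInit,
    TaskType,
    get_peft_model,
)

model_name = "gpt2"  # placeholder; the paper evaluates three open-source LLMs

# --- Variant 1: LoRA, whose trainable fraction stays well under 0.1% ---
lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,               # low-rank dimension (assumed, not from the paper)
    lora_alpha=16,     # scaling factor (assumed)
    lora_dropout=0.05, # regularization on the adapter (assumed)
)
lora_model = get_peft_model(
    AutoModelForCausalLM.from_pretrained(model_name), lora_cfg
)
lora_model.print_trainable_parameters()  # reports the tiny trainable share

# --- Variant 2: soft-prompt tuning with 10 learned virtual tokens ---
prompt_cfg = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.RANDOM,
    num_virtual_tokens=10,  # matches the 10 virtual tokens in the abstract
)
prompt_model = get_peft_model(
    AutoModelForCausalLM.from_pretrained(model_name), prompt_cfg
)
prompt_model.print_trainable_parameters()
```

Either wrapped model can then be fine-tuned with a standard training loop (e.g., on a debiasing corpus such as the QueerNews data described above) while the base model's weights stay frozen, which is what keeps the compute and storage cost of the intervention low.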