PEFTDebias: Capturing debiasing information using PEFTs
Sumit Agarwal, Aditya Srikanth Veerubhotla, Srijan Bansal
–arXiv.org Artificial Intelligence
The increasing use of foundation models highlights the urgent need to address and eliminate the implicit biases they acquire during pretraining. In this paper, we introduce PEFTDebias, a novel approach that employs parameter-efficient fine-tuning (PEFT) to mitigate the biases within foundation models. PEFTDebias consists of two main phases: an upstream phase for acquiring debiasing parameters along a specific bias axis, and a downstream phase where these parameters are incorporated into the model and frozen during the fine-tuning process. Evaluating on four datasets across two bias axes, namely gender and race, we find that downstream biases can be effectively reduced with PEFTs. In addition, we show that these parameters possess axis-specific debiasing characteristics, enabling them to transfer effectively and mitigate biases across various downstream tasks. To ensure reproducibility, we release the code used to run our experiments.
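The two-phase recipe in the abstract maps naturally onto standard PEFT tooling. Below is a minimal sketch, not the authors' released code, assuming the HuggingFace transformers and peft libraries, LoRA as the PEFT module, and a masked-language-modeling objective on counterfactually augmented text as the upstream debiasing signal; the adapter directory name and the training data are hypothetical placeholders.

```python
# Sketch of PEFT-based debiasing in two phases (illustrative, not the paper's exact setup).
import torch
from transformers import AutoModelForMaskedLM, AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model, PeftModel

# --- Upstream phase: learn debiasing PEFT parameters along one bias axis ---
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"])
mlm = get_peft_model(mlm, lora_cfg)            # only the LoRA parameters are trainable
# ... train `mlm` with an MLM loss on counterfactually augmented data (assumed objective) ...
mlm.save_pretrained("debias-adapter-gender")   # hypothetical path storing the debiasing parameters

# --- Downstream phase: inject the adapter, freeze it, fine-tune on the task ---
clf = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
clf = PeftModel.from_pretrained(clf, "debias-adapter-gender")

for name, param in clf.named_parameters():
    # Keep the debiasing adapter frozen; update all other parameters on the task.
    param.requires_grad = "lora_" not in name

# ... standard fine-tuning loop on the downstream task data ...
```

Because the debiasing parameters are tied to a bias axis rather than a task, the same saved adapter can, under this sketch, be reused across different downstream datasets that share that axis.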
Dec-1-2023