BendVLM: Test-Time Debiasing of Vision-Language Embeddings
Walter Gerych, Eileen Pan
Neural Information Processing Systems
Vision-language model (VLM) embeddings have been shown to encode biases present in their training data, such as societal biases that ascribe negative characteristics to members of various racial and gender groups. VLMs are quickly being adopted for tasks ranging from few-shot classification to text-guided image generation, making debiasing of VLM embeddings crucial. Debiasing approaches that fine-tune the VLM often suffer from catastrophic forgetting. Fine-tuning-free methods, on the other hand, typically take a "one-size-fits-all" approach, assuming that correlation with the spurious attribute can be explained by a single linear direction across all possible inputs.
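The "one-size-fits-all" linear debiasing that the abstract critiques can be illustrated with a minimal sketch: every embedding, regardless of input, has its component along one fixed bias direction projected out. The function name and vectors below are illustrative, not part of BendVLM itself.

```python
import numpy as np

def debias_linear(embedding: np.ndarray, bias_direction: np.ndarray) -> np.ndarray:
    """Remove the component of `embedding` along a single, fixed bias direction.

    This is the "one-size-fits-all" baseline: the same direction is
    subtracted for every input, which is the assumption the paper questions.
    """
    b = bias_direction / np.linalg.norm(bias_direction)
    return embedding - np.dot(embedding, b) * b

# Toy example: project a 3-d "embedding" off the third axis.
emb = np.array([1.0, 2.0, 3.0])
bias = np.array([0.0, 0.0, 1.0])
debiased = debias_linear(emb, bias)  # component along `bias` becomes zero
```

A test-time, input-adaptive method instead chooses the debiasing transform per input rather than reusing one global direction.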