Enhancing Domain Adaptation through Prompt Gradient Alignment
Lam Tran
Neural Information Processing Systems
Prior Unsupervised Domain Adaptation (UDA) methods often aim to train a domain-invariant feature extractor, which may hinder the model from learning sufficiently discriminative features. To tackle this, a line of work based on prompt learning leverages large-scale pre-trained vision-language models to learn both domain-invariant and domain-specific features through sets of domain-agnostic and domain-specific learnable prompts. These studies typically enforce invariance constraints on the representation, output, or prompt space to learn such prompts. In contrast, we cast UDA as a multi-objective optimization problem in which each objective is a per-domain loss. Under this new framework, we propose aligning the per-objective gradients to foster consensus between them. Additionally, to prevent overfitting when fine-tuning such large architectures, we penalize the norm of these gradients. To achieve both goals, we devise a practical gradient update procedure that works under both single-source and multi-source UDA.
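A minimal PyTorch sketch of the idea described in the abstract is given below: a per-domain loss is treated as one objective, gradient consensus is rewarded via cosine similarity, and squared gradient norms are penalized. The helper names (`flat_grad`, `aligned_step`) and hyperparameters (`rho`, `lam`) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def flat_grad(loss, params):
    # Differentiable gradient of `loss` w.r.t. `params`, flattened into one
    # vector; create_graph=True lets us backprop through the gradient itself.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def aligned_step(loss_src, loss_tgt, params, optimizer, rho=0.1, lam=0.01):
    # One update that (i) minimizes both per-domain losses, (ii) rewards
    # agreement (cosine similarity) between their gradients, and
    # (iii) penalizes squared gradient norms to curb overfitting.
    g_s = flat_grad(loss_src, params)
    g_t = flat_grad(loss_tgt, params)
    align = F.cosine_similarity(g_s, g_t, dim=0)
    norm_pen = g_s.pow(2).sum() + g_t.pow(2).sum()
    total = loss_src + loss_tgt - rho * align + lam * norm_pen
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()

# Toy usage with a single learnable "prompt" vector and one quadratic
# loss per domain (hypothetical stand-ins for the real domain losses).
prompt = torch.nn.Parameter(torch.randn(16))
opt = torch.optim.SGD([prompt], lr=0.1)
x_src, x_tgt = torch.randn(16), torch.randn(16)
aligned_step((prompt - x_src).pow(2).sum(),
             (prompt - x_tgt).pow(2).sum(),
             [prompt], opt)
```

Note that making the alignment and norm-penalty terms trainable requires differentiating through the gradients themselves (`create_graph=True`), at the cost of a second-order backward pass.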
- Genre:
  - Research Report > Experimental Study (0.93)
- Industry:
  - Education (0.45)
  - Information Technology (0.45)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning
      - Neural Networks > Deep Learning (0.48)
      - Statistical Learning > Gradient Descent (0.66)
    - Natural Language (1.00)
    - Representation & Reasoning > Optimization (0.87)
    - Vision (1.00)