Distribution Alignment


Adversarial Reweighting for Partial Domain Adaptation

Neural Information Processing Systems

The conventional closed-set DA methods generally assume that the source and target domains share the same label space. However, this assumption is often not realistic in practice.


Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization

Neural Information Processing Systems

TPT does not explicitly align the pre-trained CLIP model to the test sample distribution. For effective test-time adaptation of V-L foundation models, it is crucial to bridge the distribution gap between the pre-training dataset and the downstream evaluation set to achieve strong zero-shot generalization.
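A minimal sketch of the underlying idea, not the paper's exact objective: one common way to measure the distribution gap at test time is to compare feature statistics (mean and variance over augmented views of a single test sample) against precomputed source statistics. All names here (`alignment_loss`, the toy feature arrays) are illustrative assumptions.

```python
import numpy as np

def alignment_loss(test_features, source_mean, source_var):
    """L1 distance between test-batch feature statistics and
    precomputed source statistics (a common alignment objective)."""
    test_mean = test_features.mean(axis=0)
    test_var = test_features.var(axis=0)
    return (np.abs(test_mean - source_mean).sum()
            + np.abs(test_var - source_var).sum())

# Toy usage: features from 64 augmented views of one test sample,
# drawn from a shifted distribution relative to the source.
rng = np.random.default_rng(0)
source_mean = np.zeros(8)
source_var = np.ones(8)
test_features = rng.normal(loc=0.5, scale=1.0, size=(64, 8))
loss = alignment_loss(test_features, source_mean, source_var)
```

In a real test-time adaptation loop, this scalar would be minimized with respect to prompt parameters rather than computed once.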



Cooperative Distribution Alignment via JSD Upper Bound

Neural Information Processing Systems

Unsupervised distribution alignment estimates a transformation that maps two or more source distributions to a shared aligned distribution, given only samples from each distribution. This task has many applications, including generative modeling, unsupervised domain adaptation, and socially aware learning.
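As a small illustration of the divergence named in the title (not the paper's upper bound itself), the Jensen–Shannon divergence between two discrete distributions can be computed directly from its definition as the average KL divergence to the mixture:

```python
import numpy as np

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence between discrete distributions p and q (in nats).

    JSD(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), where m = (p + q) / 2.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# JSD is 0 for identical distributions and bounded above by log(2) in nats.
print(jsd([0.5, 0.5], [0.5, 0.5]))  # → 0.0
print(jsd([1.0, 0.0], [0.0, 1.0]))  # ≈ log(2) ≈ 0.693
```

The boundedness of JSD (unlike KL) is what makes upper-bounding it a natural objective for aligning distributions.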




1df282080150537df7b00c20aadcafad-Paper-Conference.pdf

Neural Information Processing Systems

In this paper, we first investigate two kinds of trivial solutions in the compositional generation process, and demonstrate that their source is vanishing gradients on the mask.



06964dce9addb1c5cb5d6e3d9838f733-AuthorFeedback.pdf

Neural Information Processing Systems

We thank the reviewers for their feedback. We will reflect the reviewers' comments and our responses in the revision. Reviewers raised concerns about novelty and accuracy. DA is more effective when the task is more challenging; we also find DA effective when the amount of labeled data is small.


Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization

Neural Information Processing Systems

The promising zero-shot generalization of vision-language models such as CLIP has led to their adoption using prompt learning for numerous downstream tasks. Previous works have shown test-time prompt tuning using entropy minimization to adapt text prompts for unseen domains. While effective, this overlooks the key cause for performance degradation to unseen domains -- distribution shift. In this work, we explicitly handle this problem by aligning the out-of-distribution (OOD) test sample statistics to those of the source data using prompt tuning. We use a single test sample to adapt multi-modal prompts at test time by minimizing the feature distribution shift to bridge the gap in the test domain. Evaluating against the domain generalization benchmark, our method improves zero-shot top-1 accuracy beyond existing prompt-learning techniques, with a 3.08% improvement over the baseline MaPLe. In cross-dataset generalization with unseen categories across 10 datasets, our method improves consistently across all datasets compared to the existing state-of-the-art.