Unified Optimal Transport Framework for Universal Domain Adaptation (Supplementary Material)

Neural Information Processing Systems

Recall measures the fraction of common samples that are retrieved as the correct common class, while specificity measures the fraction of private samples that are not retrieved. Fig. S1(b) shows the sensitivity of γ, where γ is the rough boundary for splitting positives and negatives in adaptive filling. The cosine similarity of two ℓ2-normalized features is bounded between −1 and 1, where a higher value indicates higher similarity. Such self-supervised learning methods encourage consistency between two augmentations of one image. The display images for source prototypes are chosen by finding the nearest source instance to each prototype.
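The recall and specificity defined above can be sketched as follows; this is a minimal illustration, not the paper's evaluation code, and the function name and the binary common/private encoding are assumptions:

```python
import numpy as np

def recall_specificity(is_common, predicted_common):
    """Recall: fraction of common samples retrieved as common.
    Specificity: fraction of private samples not retrieved as common."""
    is_common = np.asarray(is_common, dtype=bool)
    predicted_common = np.asarray(predicted_common, dtype=bool)
    recall = (is_common & predicted_common).sum() / is_common.sum()
    specificity = (~is_common & ~predicted_common).sum() / (~is_common).sum()
    return recall, specificity

# Toy example: 4 common samples (3 retrieved), 2 private (1 wrongly retrieved)
r, s = recall_specificity([1, 1, 1, 1, 0, 0], [1, 1, 1, 0, 1, 0])
# r == 0.75, s == 0.5
```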



Supplemental Material for Adapting Self-Supervised Vision Transformers by Probing Attention-Conditioned Masking Consistency

Neural Information Processing Systems

To compare the quality of target samples selected for training, we measure reliability precision (how many of the selected target samples were actually predicted correctly?). We report expected calibration error (ECE [7]); lower is better. We separately visualize features before and after in-domain pretraining with MAE [7] and DINO [8]. We note that these features are completely self-supervised, as the model has not seen task labels yet. Regardless, we observe a small degree of task-discriminativeness (examples of the same class are clustered together) and domain invariance (examples of the same class but different domains are close) before additional pretraining. We now measure the degree of label overlap between ImageNet-22K and these 3 benchmarks.
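The expected calibration error reported above is typically computed with equal-width confidence bins, as in Guo et al.'s formulation. A minimal sketch (the function name, bin count, and toy inputs are assumptions, not the authors' implementation):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Equal-width-bin ECE: sum over bins of
    (bin weight) * |bin accuracy - bin mean confidence|."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()       # empirical accuracy in bin
            conf = confidences[in_bin].mean()  # mean predicted confidence
            ece += in_bin.mean() * abs(acc - conf)
    return ece

# Two bins, each half the data and each 0.05 off: ECE = 0.05
ece = expected_calibration_error([0.95, 0.95, 0.55, 0.55], [1, 1, 1, 0])
```

A perfectly calibrated model (bin accuracy equals bin confidence everywhere) gives ECE = 0, which is why lower is better.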