Credal Self-Supervised Learning: Supplementary Material

Neural Information Processing Systems 

A.1 Algorithmic Description of CSSL

Algorithm 1 provides the pseudo-code of the batch-wise loss calculation in CSSL.

Algorithm 1: CSSL with adaptive precisiation α

For CTAugment (and later RandAugment, as considered in Section A.4.2), we use the same operations.

Figure 1 shows the learning curves of the runs considered in the efficiency study in Section 4.3.

As ground truth, we define the true probability of the positive class by a sigmoid-shaped function. In this setting, self-training of a simple neural network with deterministic labeling leads to a flat (instead of sigmoidal) function most of the time, because the learner tends to go with the majority in the labeled training data. With probabilistic labels, the results become somewhat better: the learned functions tend to be increasing, but still deviate considerably from the ground-truth sigmoid. Table 3 shows the results. In the following, we call this variant UPSMatch.
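The one-dimensional toy setting described above can be sketched as follows. This is a minimal illustration, not the experiment's actual implementation: it assumes a sigmoid ground-truth probability, a small labeled set, and plain logistic regression (by gradient descent) in place of the neural network, contrasting self-training with deterministic (hard) versus probabilistic (soft) pseudo-labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed ground truth: the true probability of the positive class
# is a sigmoid of the 1-d input (the slope 3.0 is an arbitrary choice).
def true_prob(x):
    return sigmoid(3.0 * x)

# Small labeled set, larger unlabeled set (sizes are illustrative).
x_lab = rng.uniform(-2.0, 2.0, size=20)
y_lab = (rng.random(20) < true_prob(x_lab)).astype(float)
x_unl = rng.uniform(-2.0, 2.0, size=500)

def fit_logreg(x, y, w=0.0, b=0.0, lr=0.1, steps=2000):
    """Gradient-descent logistic regression on 1-d inputs.
    Targets `y` may be soft labels in [0, 1]."""
    for _ in range(steps):
        p = sigmoid(w * x + b)
        w -= lr * np.mean((p - y) * x)  # dCE/dw for the logistic model
        b -= lr * np.mean(p - y)        # dCE/db
    return w, b

# Step 1: supervised fit on the labeled data only.
w0, b0 = fit_logreg(x_lab, y_lab)

# Step 2a: self-training with deterministic (hard) pseudo-labels.
y_hard = (sigmoid(w0 * x_unl + b0) >= 0.5).astype(float)
w_hard, b_hard = fit_logreg(np.concatenate([x_lab, x_unl]),
                            np.concatenate([y_lab, y_hard]))

# Step 2b: self-training with probabilistic (soft) pseudo-labels.
y_soft = sigmoid(w0 * x_unl + b0)
w_soft, b_soft = fit_logreg(np.concatenate([x_lab, x_unl]),
                            np.concatenate([y_lab, y_soft]))
```

Comparing the two fitted curves against `true_prob` reproduces the qualitative comparison discussed above; with a richer model class, hard labels additionally invite the confirmation-bias effect that credal labeling is designed to mitigate.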

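Algorithm 1 itself is not reproduced in this excerpt. Purely as an illustrative sketch of a credal loss, and not the authors' exact formulation, one can take the loss of a prediction against a credal pseudo-label to be the minimum cross-entropy over the credal set; assuming an ε-contamination set with contamination level α (standing in for the precisiation parameter), this minimum has a closed form:

```python
import numpy as np

def cross_entropy(q, p):
    """CE(q, p) = -sum_k q_k * log(p_k)."""
    return -np.sum(q * np.log(p))

def credal_loss(target, pred, alpha):
    """Minimum cross-entropy over the (assumed) epsilon-contamination
    credal set Q = {(1 - alpha) * target + alpha * r : r any distribution}.

    Since CE is linear in its first argument, the inner minimum is attained
    by putting the contaminating mass alpha on the model's most probable
    class, giving the closed form below.
    """
    return ((1.0 - alpha) * cross_entropy(target, pred)
            + alpha * (-np.log(np.max(pred))))
```

With α = 0 the credal set shrinks to the single target distribution and the loss reduces to ordinary cross-entropy; with α = 1 the set contains all distributions and the loss becomes the negative log of the model's top probability, i.e., the unlabeled point can never contradict a confident prediction.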