Credal Self-Supervised Learning: Supplementary Material
Neural Information Processing Systems
A.1 Algorithmic Description of CSSL

Algorithm 1 provides the pseudo-code of the batch-wise loss calculation in CSSL.

Algorithm 1 CSSL with adaptive precisiation α
Require:

For CTAugment (and later RandAugment, as considered in Section A.4.2), we use the same operations.

Figure 1 shows the learning curves of the runs considered in the efficiency study in Section 4.3.

As ground truth, we define the true probability of the positive class by a sigmoidally shaped function. In this setting, self-training of a simple neural network with deterministic labeling leads to a flat (instead of sigmoidal) function most of the time, because the learner tends to go with the majority in the labeled training data. With probabilistic labels, the results become somewhat better: the learned functions tend to be increasing, but they still deviate considerably from the ground-truth sigmoid. Table 3 shows the results.

In the following, we call this variant UPSMatch.
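The contrast between deterministic (hard) and probabilistic (soft) pseudo-labels in self-training can be sketched as follows. This is a minimal illustrative toy, not the paper's exact setup: it assumes a 1-D input, a logistic-regression learner in place of the neural network, and a single self-training round; all names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_prob(x):
    # ground truth: sigmoidally shaped probability of the positive class
    return 1.0 / (1.0 + np.exp(-4.0 * x))

# small labeled set, larger unlabeled set on [-1, 1]
x_lab = rng.uniform(-1, 1, 20)
y_lab = (rng.uniform(size=20) < true_prob(x_lab)).astype(float)
x_unl = rng.uniform(-1, 1, 500)

def fit(x, y, epochs=500, lr=0.5):
    """Fit p(y=1|x) = sigmoid(w*x + b) by gradient descent on the log-loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        g = p - y                      # gradient of the log-loss w.r.t. logits
        w -= lr * np.mean(g * x)
        b -= lr * np.mean(g)
    return w, b

def predict(w, b, x):
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

# fit on the labeled data only, then pseudo-label the unlabeled points
w0, b0 = fit(x_lab, y_lab)

# one self-training round with deterministic (hard) pseudo-labels
hard = (predict(w0, b0, x_unl) > 0.5).astype(float)
w_h, b_h = fit(np.concatenate([x_lab, x_unl]),
               np.concatenate([y_lab, hard]))

# the same round with probabilistic (soft) pseudo-labels
soft = predict(w0, b0, x_unl)
w_s, b_s = fit(np.concatenate([x_lab, x_unl]),
               np.concatenate([y_lab, soft]))
```

Comparing `predict(w_h, b_h, x)` and `predict(w_s, b_s, x)` against `true_prob(x)` on a grid shows how each labeling scheme shapes the learned function; hard labels feed the learner's own thresholded decisions back as certain targets, whereas soft labels retain the predicted class probabilities.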