Contrastive Credibility Propagation for Reliable Semi-Supervised Learning

Brody Kutt, Pralay Ramteke, Xavier Mignot, Pamela Toman, Nandini Ramanan, Sujit Rokka Chhetri, Shan Huang, Min Du, William Hewlett

arXiv.org Artificial Intelligence 

A fundamental goal of semi-supervised learning (SSL) is to ensure the use of unlabeled data results in a classifier that outperforms a baseline trained only on labeled data (supervised baseline). However, this is often not the case (Oliver et al. 2018). The problem is often overlooked as SSL algorithms are frequently evaluated only on clean and balanced datasets where the sole experimental variable is the number of given labels. Worse, in the pursuit of maximizing label efficiency, many modern SSL algorithms such as (Berthelot et al. 2019; Sohn et al. 2020; Zheng et al. 2022; Li, Xiong, and Hoi 2021) and others rely on a mechanism that directly encourages the marginal distribution of label predictions to be close to the marginal distribution of ground truth labels (known as distribution alignment). This mechanism presumes the labeled and unlabeled data share the same class distribution, an assumption that frequently fails in practice.

Consequently, such systems necessitate external components like Out-of-Distribution (OOD) detectors to prevent failures, albeit at the cost of increased complexity. Instead of maximizing the robustness to any one data variable, we strive to build an SSL algorithm that is robust to all data variables, i.e., one that can match or outperform a supervised baseline. To address this challenge, we first hypothesize that sensitivity to pseudo-label errors is the root cause of all failures. This rationale is based on the simple fact that a hypothetical SSL algorithm, consisting of a pseudo-labeler with a rejection option and a means to build a classifier, could always match or outperform its supervised baseline if the pseudo-labeler made no mistakes. Such a pseudo-labeler is unrealistic, of course. Instead, we build into our solution the means to work around those inevitable errors.
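
For concreteness, the distribution alignment mechanism referenced above can be sketched in a few lines. The following is a minimal illustration in the spirit of ReMixMatch (Berthelot et al. 2019), not the implementation of any particular paper: each batch of predictions on unlabeled data is rescaled by the ratio of an assumed class prior to a running average of the model's past predictions, then renormalized. The names distribution_alignment and target_dist and the smoothing constant 1e-8 are illustrative assumptions.

    import numpy as np

    def distribution_alignment(probs, running_avg, target_dist):
        # probs:       (batch, n_classes) softmax outputs on unlabeled data
        # running_avg: (n_classes,) running mean of the model's past predictions
        # target_dist: (n_classes,) assumed marginal distribution of labels
        aligned = probs * (target_dist / (running_avg + 1e-8))
        return aligned / aligned.sum(axis=1, keepdims=True)  # re-normalize rows

    # Example: 64 unlabeled predictions over 10 classes, uniform assumed prior.
    probs = np.random.dirichlet(np.ones(10), size=64)
    running_avg = probs.mean(axis=0)
    aligned = distribution_alignment(probs, running_avg, np.full(10, 0.1))

The failure mode motivating this passage is visible in the formula: if the unlabeled data's true class distribution differs from target_dist, the rescaling systematically pushes pseudo-labels toward the wrong marginal.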
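
The hypothetical pseudo-labeler with a rejection option can likewise be made concrete. The sketch below is a hypothetical construction rather than the paper's algorithm: it accepts a hard pseudo-label only when the top predicted probability clears a confidence threshold and rejects the remaining examples; the threshold of 0.95 is an arbitrary illustrative choice.

    import numpy as np

    def pseudo_label_with_rejection(probs, threshold=0.95):
        # probs: (n_unlabeled, n_classes) softmax outputs
        # Returns the indices and hard labels of accepted examples only.
        confidence = probs.max(axis=1)
        labels = probs.argmax(axis=1)
        accepted = confidence >= threshold  # the rejection option
        return np.where(accepted)[0], labels[accepted]

If such a selector never accepted an incorrect label, retraining on the labeled set plus the accepted pseudo-labels could not underperform the supervised baseline, which is exactly the premise of the hypothesis above; in practice some accepted labels are wrong, hence the need for a solution that tolerates those errors.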
