FixMatch
- Information Technology > Artificial Intelligence > Machine Learning > Unsupervised or Indirectly Supervised Learning (0.56)
- Information Technology > Artificial Intelligence > Machine Learning > Inductive Learning (0.53)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.30)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Inductive Learning (0.86)
- Information Technology > Artificial Intelligence > Machine Learning > Unsupervised or Indirectly Supervised Learning (0.69)
f7ac67a9aa8d255282de7d11391e1b69-AuthorFeedback.pdf
In the main objective, the program optimizes for Λ based on the supervised loss of the "validation" set. SSL typically uses an "unsupervised loss" to leverage unlabeled data. While the model may not generalize if the unsupervised loss is poorly designed, recent works [38, 36] empirically validate their proposed loss. Theoretical analysis of SSL has also been provided under various assumptions, e.g., [6, A]. We encourage R1 to study these works, which show how unsupervised losses aid generalization.
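The unsupervised loss discussed above can be illustrated with a minimal sketch of FixMatch-style consistency training: pseudo-label each unlabeled sample from its weakly-augmented prediction, then push the strongly-augmented view toward that label, masked by a confidence threshold. The function name, the threshold value `tau=0.95`, and the toy inputs are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def unsupervised_loss(weak_probs, strong_probs, tau=0.95):
    """Sketch of a FixMatch-style unsupervised (consistency) loss.

    weak_probs, strong_probs: (N, C) softmax outputs for the weakly and
    strongly augmented views of the same N unlabeled samples.
    tau: confidence threshold below which a sample is ignored.
    """
    pseudo = np.argmax(weak_probs, axis=1)            # hard pseudo-labels
    conf = weak_probs[np.arange(len(pseudo)), pseudo]  # max class probability
    mask = conf >= tau                                 # keep confident samples only
    # cross-entropy of the strong view against the pseudo-label
    ce = -np.log(strong_probs[np.arange(len(pseudo)), pseudo] + 1e-12)
    return float(np.mean(ce * mask))

# Example: two unlabeled samples; only the first passes the threshold,
# so only its cross-entropy contributes to the mean.
weak = np.array([[0.97, 0.03], [0.60, 0.40]])
strong = np.array([[0.90, 0.10], [0.50, 0.50]])
loss = unsupervised_loss(weak, strong)
```

A poorly designed version of this loss (e.g. no confidence mask) would reinforce wrong pseudo-labels, which is exactly the failure mode the cited theoretical analyses address.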
DP-SSL: Towards Robust Semi-supervised Learning with A Few Labeled Samples
However, when the size of labeled data is very small (say a few labeled samples per class), SSL performs poorly and unstably, possibly due to the low quality of learned pseudo-labels. In this paper, we propose a new SSL method called DP-SSL that adopts an innovative data programming (DP) scheme to generate probabilistic labels for unlabeled data. Different from existing DP methods that rely on human experts to provide initial labeling functions (LFs), we develop a multiple-choice learning (MCL) based approach to automatically generate LFs from scratch in SSL style. With the noisy labels produced by the LFs, we design a label model to resolve the conflict and overlap among the noisy labels, and finally infer probabilistic labels for unlabeled samples.
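The label-model step described in the abstract can be sketched with a toy aggregator: combine conflicting LF votes into probabilistic labels by weighting each LF by the log-odds of its accuracy. This is a simplification for illustration; DP-SSL's actual label model, and how LF accuracies are estimated, differ. The function name, the assumed-known accuracies, and the `-1` abstention encoding are all hypothetical.

```python
import numpy as np

def probabilistic_labels(lf_votes, lf_accuracy, n_classes):
    """Toy label model: weighted vote over noisy labeling functions.

    lf_votes:    (N, M) int array; vote of each of M LFs per sample,
                 with -1 meaning the LF abstains on that sample.
    lf_accuracy: (M,) assumed accuracy of each LF.
    Returns (N, n_classes) probabilistic labels (rows sum to 1).
    """
    n, m = lf_votes.shape
    scores = np.zeros((n, n_classes))
    for j in range(m):
        w = np.log(lf_accuracy[j] / (1.0 - lf_accuracy[j]))  # log-odds weight
        for c in range(n_classes):
            scores[:, c] += w * (lf_votes[:, j] == c)  # abstentions never match
    # softmax over classes -> probabilistic labels
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

votes = np.array([[0, 0, 1],    # two LFs vote class 0, one votes class 1
                  [1, -1, 1]])  # the second LF abstains here
probs = probabilistic_labels(votes, np.array([0.9, 0.8, 0.7]), n_classes=2)
```

Weighting by accuracy resolves the conflict in the first sample in favor of the two more reliable LFs, while the abstention in the second sample simply contributes nothing, mirroring how a label model turns overlapping, conflicting weak signals into soft targets.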