Semi-Supervised Learning with Meta-Gradient

Zhang, Xin-Yu, Jia, Hao-Lin, Xiao, Taihong, Cheng, Ming-Ming, Yang, Ming-Hsuan

arXiv.org Machine Learning 

In this work, we propose a simple yet effective meta-learning algorithm for the semi-supervised setting. We observe that existing consistency-based approaches largely ignore the essential role of label information in consistency regularization. To alleviate this issue, we couple the consistency loss with the label information by unfolding and differentiating through one optimization step. Specifically, we update the pseudo labels of the unlabeled examples using the meta-gradients of the labeled data loss, so that the model generalizes well on the labeled examples. In addition, we introduce a simple first-order approximation to avoid computing higher-order derivatives and to guarantee scalability. Extensive evaluations on the SVHN, CIFAR, and ImageNet datasets demonstrate that the proposed algorithm performs favorably against state-of-the-art methods.
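The core idea, unfolding one optimization step on the unlabeled (consistency) loss and differentiating the labeled loss with respect to the pseudo labels, can be illustrated on a toy problem. The sketch below is a hypothetical minimal setup (a scalar linear model, one labeled pair, one unlabeled input), not the authors' implementation; for this linear case the unfolded meta-gradient is simple enough to write by hand, whereas the paper's first-order approximation is what avoids higher-order derivatives in deep networks.

```python
# Hypothetical toy illustration of meta-gradient pseudo-labeling:
# a scalar linear model w, one labeled pair (x_s, y_s), and one
# unlabeled input x_u whose pseudo label q is treated as trainable.

def inner_step(w, x_u, q, lr_inner):
    # One SGD step on the unlabeled loss 0.5 * (w * x_u - q) ** 2.
    grad_w = (w * x_u - q) * x_u
    return w - lr_inner * grad_w

def meta_grad_q(w, x_s, y_s, x_u, q, lr_inner):
    # Unfold one inner step, then differentiate the labeled loss
    # 0.5 * (w' * x_s - y_s) ** 2 with respect to q via the chain rule:
    # dw'/dq = lr_inner * x_u  (from grad_w in inner_step above).
    w_new = inner_step(w, x_u, q, lr_inner)
    dLs_dw = (w_new * x_s - y_s) * x_s
    return dLs_dw * (lr_inner * x_u)

# Meta-optimize q so the adapted model fits the labeled example.
w, x_s, y_s, x_u, q = 0.0, 1.0, 2.0, 1.0, 0.0
lr_inner, lr_meta = 0.1, 10.0
for _ in range(200):
    q -= lr_meta * meta_grad_q(w, x_s, y_s, x_u, q, lr_inner)
w_adapted = inner_step(w, x_u, q, lr_inner)
# After meta-training, the adapted model predicts y_s on the labeled input.
```

In this toy case the meta-gradient drives the pseudo label to the value for which one inner step makes the model fit the labeled example exactly, which is precisely the objective the unfolding construction encodes.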
