Semi-Supervised Learning with Meta-Gradient
Zhang, Xin-Yu, Jia, Hao-Lin, Xiao, Taihong, Cheng, Ming-Ming, Yang, Ming-Hsuan
In this work, we propose a simple yet effective meta-learning algorithm in the semi-supervised setting. We observe that existing consistency-based approaches largely ignore the essential role of label information in consistency regularization. To alleviate this issue, we connect the consistency loss with the label information by unfolding and differentiating through one optimization step. Specifically, we refine the pseudo labels of the unlabeled examples with the meta-gradients of the labeled data loss so that the model generalizes well on the labeled examples. In addition, we introduce a simple first-order approximation to avoid computing higher-order derivatives and to guarantee scalability. Extensive evaluations on the SVHN, CIFAR, and ImageNet datasets demonstrate that the proposed algorithm performs favorably against state-of-the-art methods.
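The core idea of differentiating the labeled-data loss through one unrolled optimization step can be sketched on a toy scalar model. The sketch below is an illustrative assumption, not the paper's implementation: a linear model `w*x`, a single unlabeled example with a learnable pseudo label `q`, one SGD step on the pseudo-label loss, and an analytic meta-gradient of the labeled loss with respect to `q` (for a scalar model the unrolled derivative is cheap to write out by hand; the paper's first-order approximation exists to avoid such higher-order terms at scale). All function names here are hypothetical.

```python
import numpy as np

def inner_step(w, x_u, q, lr):
    # One SGD step on the unlabeled loss 0.5 * (w*x_u - q)^2,
    # where q is the current pseudo label for the unlabeled example x_u.
    grad_w = x_u * (w * x_u - q)
    return w - lr * grad_w

def labeled_loss(w, x_l, y_l):
    # Squared-error loss on a labeled example (x_l, y_l).
    return 0.5 * (w * x_l - y_l) ** 2

def meta_grad_q(w, x_u, q, x_l, y_l, lr):
    # Differentiate the labeled loss through the unrolled inner step:
    #   w' = w - lr * x_u * (w*x_u - q)   =>   dw'/dq = lr * x_u
    # so  dL_labeled/dq = dL_labeled/dw' * dw'/dq.
    w_new = inner_step(w, x_u, q, lr)
    dL_dw = (w_new * x_l - y_l) * x_l
    return dL_dw * lr * x_u

# Hypothetical toy values: update the pseudo label along the meta-gradient
# so that the post-inner-step model fits the labeled example better.
w, x_u, q, x_l, y_l, lr, meta_lr = 0.5, 1.0, 0.0, 2.0, 1.0, 0.1, 1.0
q_new = q - meta_lr * meta_grad_q(w, x_u, q, x_l, y_l, lr)
```

Iterating this pseudo-label update, followed by ordinary training on both labeled data and the refined pseudo labels, is the high-level loop the abstract describes; in deep networks the chain rule above would involve Hessian-vector products, which the first-order approximation drops.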
Jul-8-2020