Learning with Complementary Labels Revisited: A Consistent Approach via Negative-Unlabeled Learning
Wang, Wei, Ishida, Takashi, Zhang, Yu-Jie, Niu, Gang, Sugiyama, Masashi
– arXiv.org Artificial Intelligence
Deep learning and its applications have achieved great success in recent years. However, achieving good performance requires large amounts of training data with accurate labels, which may not be available in some real-world scenarios. Because they reduce the cost and effort of labeling while maintaining comparable performance, various weakly supervised learning problems have been investigated in recent years, including semi-supervised learning [Berthelot et al., 2019], noisy-label learning [Patrini et al., 2017], programmatic weak supervision [Zhang et al., 2021a], positive-unlabeled learning [Bekker and Davis, 2020], similarity-based classification [Hsu et al., 2019], and partial-label learning [Wang et al., 2022]. Complementary-label learning is another weakly supervised learning problem that has recently received considerable attention [Ishida et al., 2017]. In complementary-label learning, we are given training data associated with complementary labels, which specify classes to which the examples do not belong. The task is to learn a multi-class classifier that assigns correct labels to ordinary-label test data. Collecting training data with complementary labels is much easier and cheaper than collecting ordinarily labeled data. For example, when asking workers on crowdsourcing platforms to annotate training data, we only need to randomly select a candidate label and ask whether the example belongs to that class or not.
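The crowdsourcing protocol described above can be made concrete with a small simulation. The sketch below is a minimal illustration under our own assumptions (the function name is hypothetical, the candidate label is drawn uniformly at random, and the annotator is assumed to answer correctly); it is not the paper's implementation. It draws one candidate class per example and keeps the examples for which the annotator would answer "no", each such answer yielding a complementary label.

```python
import numpy as np

def simulate_complementary_annotation(true_labels, num_classes, seed=0):
    """Simulate the random-candidate annotation protocol sketched above.

    For each example, pick one candidate class uniformly at random and keep
    the example as complementarily labeled if a (perfect) annotator answers
    "no, it does not belong to this class". Hypothetical helper for
    illustration only; the uniform candidate distribution is an assumption.
    """
    rng = np.random.default_rng(seed)
    true_labels = np.asarray(true_labels)
    candidates = rng.integers(0, num_classes, size=len(true_labels))
    keep = candidates != true_labels  # "no" answers become complementary labels
    return [(int(i), int(c)) for i, c in zip(np.flatnonzero(keep), candidates[keep])]

# Example with a 10-class problem: each returned pair is
# (example index, complementary label), i.e., a class the example does NOT belong to.
print(simulate_complementary_annotation([3, 7, 1, 3, 9, 0], num_classes=10))
```

Note that, unlike ordinary labeling, the annotator never has to identify the correct class; a single yes/no judgment per example suffices, which is what makes this form of supervision cheap to collect.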
Nov-26-2023