Weakly Supervised Label Learning Flows
You Lu, Chidubem Arachie, Bert Huang
arXiv.org Artificial Intelligence
Supervised learning usually requires a large amount of labeled data. However, obtaining ground-truth labels is costly for many tasks. Alternatively, weakly supervised methods learn from cheap weak signals that only approximately label some data. Many existing weakly supervised learning methods learn a deterministic function that estimates labels given the input data and weak signals. In this paper, we develop label learning flows (LLF), a general framework for weakly supervised learning problems. Our method is a generative model based on normalizing flows. The main idea of LLF is to optimize the conditional likelihoods of all possible labelings of the data within a constrained space defined by the weak signals. We develop a training method for LLF that trains the conditional flow inversely and avoids estimating the labels. Once the model is trained, we make predictions with a sampling algorithm. We apply LLF to three weakly supervised learning problems. Experimental results show that our method outperforms many baselines we compare against.
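To make the abstract's high-level description more concrete, below is a minimal, self-contained sketch of the general idea, not the paper's exact algorithm: a conditional normalizing flow modeling p(y | x), whose conditional likelihood is maximized over labelings drawn from a region around the weak signals, and whose predictions are obtained by sampling. The dimensions, the way candidate labelings are drawn, and all hyperparameters here are illustrative assumptions.

```python
# Hedged sketch of a conditional normalizing flow for weak supervision.
# This approximates the idea described in the abstract; the actual LLF
# objective and constraint handling are defined in the paper.
import math
import torch
import torch.nn as nn

FEAT_DIM, LABEL_DIM, HIDDEN = 16, 3, 64  # illustrative sizes

class ConditionalAffineFlow(nn.Module):
    """One affine layer: y = z * exp(s(x)) + t(x), invertible in z for fixed x."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, 2 * LABEL_DIM),
        )

    def log_prob(self, y, x):
        # Inverse direction: map a labeling y back to base noise z and apply
        # the change-of-variables formula to get its conditional log-density.
        s, t = self.net(x).chunk(2, dim=-1)
        z = (y - t) * torch.exp(-s)
        log_base = -0.5 * (z ** 2 + math.log(2 * math.pi)).sum(dim=-1)
        return log_base - s.sum(dim=-1)

    def sample(self, x):
        # Forward direction: generate a labeling from base noise.
        s, t = self.net(x).chunk(2, dim=-1)
        z = torch.randn_like(t)
        return z * torch.exp(s) + t

flow = ConditionalAffineFlow()
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)

x = torch.randn(32, FEAT_DIM)     # input features (synthetic placeholder)
weak = torch.rand(32, LABEL_DIM)  # approximate labels given by weak signals

for _ in range(200):
    # Illustrative stand-in for "labelings within the constrained space
    # defined by weak signals": small perturbations of the weak labels.
    y_candidates = (weak + 0.05 * torch.randn_like(weak)).clamp(0.0, 1.0)
    loss = -flow.log_prob(y_candidates, x).mean()  # maximize conditional likelihood
    opt.zero_grad()
    loss.backward()
    opt.step()

# Prediction for one input: draw several labelings and average them.
with torch.no_grad():
    samples = torch.stack([flow.sample(x[:1]) for _ in range(100)])
    y_pred = samples.mean(dim=0).clamp(0.0, 1.0)
```

As a design note, the flow supports both directions: the inverse map gives exact conditional densities for training without point label estimates, and the forward map gives cheap sampling at prediction time, which matches the training/inference split described in the abstract.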
Feb-19-2023