Strength from Weakness: Fast Learning Using Weak Supervision
Joshua Robinson, Stefanie Jegelka, Suvrit Sra
While access to large amounts of labeled data has enabled the training of large models with great success in applied machine learning, such access remains a key bottleneck. In numerous settings (e.g., scientific measurements, experiments, medicine), obtaining a large number of labels can be prohibitively expensive, error prone, or otherwise infeasible. When labels are scarce, a common alternative is to use additional sources of information: "weak labels" that carry information about the "strong" target task and are more readily available, e.g., a related task, or noisy versions of strong labels from non-experts or cheaper measurements. This setting is called weakly supervised learning, and given its great practical relevance it has received much attention [11, 25, 34, 43, 67]. A prominent example, which enabled breakthrough results in computer vision and is now standard practice, is to pre-train a complex model on a related large-data task and then reuse the learned features, fine-tuning for instance only the last layer on the small-data target task [15, 21, 49, 63].
Feb-19-2020
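To make the pre-train-then-fine-tune recipe from the abstract concrete, here is a minimal sketch in PyTorch. It assumes an ImageNet-pretrained ResNet-18 as the large-data model; the number of target classes, the learning rate, and the training-step helper are illustrative placeholders, not details from the paper.

```python
# A minimal sketch of last-layer fine-tuning, assuming PyTorch/torchvision.
# num_target_classes and the hyperparameters below are placeholders.
import torch
import torch.nn as nn
from torchvision import models

# Start from a model pre-trained on a large related task (here: ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained features; only the new last layer will be trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer to match the small-data target ("strong") task.
num_target_classes = 10  # placeholder for the strong task's label count
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

# Optimize only the last layer's parameters on the scarce strong labels.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune_step(x, y):
    """One gradient step on a batch (x, y) of strongly labeled data."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing all but the last layer keeps the number of trainable parameters small, which is what makes this recipe viable when strong labels are scarce.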