Long-tail learning via logit adjustment
Aditya Krishna Menon, Sadeep Jayasumana, Ankit Singh Rawat, Himanshu Jain, Andreas Veit, Sanjiv Kumar
Real-world classification problems typically exhibit an imbalanced or long-tailed label distribution, wherein many labels are associated with only a few samples. This poses a challenge for generalisation on such labels, and also makes naïve learning biased towards dominant labels. In this paper, we present two simple modifications of standard softmax cross-entropy training to cope with these challenges. Our techniques revisit the classic idea of logit adjustment based on the label frequencies, either applied post-hoc to a trained model, or enforced in the loss during training. Such adjustment encourages a large relative margin between logits of rare versus dominant labels. These techniques unify and generalise several recent proposals in the literature, while possessing firmer statistical grounding and empirical performance.
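The two variants described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names, the toy class priors, and the logits are assumptions, and `tau` is the adjustment scaling (with `tau=1` corresponding to adjusting by the raw log label frequencies).

```python
import numpy as np

def posthoc_logit_adjustment(logits, class_priors, tau=1.0):
    """Post-hoc variant: subtract tau * log(prior) from each class logit
    of an already-trained model before taking the argmax."""
    return logits - tau * np.log(class_priors)

def logit_adjusted_cross_entropy(logits, label, class_priors, tau=1.0):
    """Loss-time variant: add tau * log(prior) to the logits inside the
    softmax cross-entropy, which enforces a larger relative margin for
    rare labels during training."""
    adjusted = logits + tau * np.log(class_priors)
    log_probs = adjusted - np.log(np.sum(np.exp(adjusted)))
    return -log_probs[label]

# Toy example: 3 classes with a long-tailed prior; the trained model's
# logits are biased towards the dominant (head) class.
priors = np.array([0.7, 0.2, 0.1])
logits = np.array([2.0, 1.9, 1.8])
adjusted = posthoc_logit_adjustment(logits, priors, tau=1.0)
print(np.argmax(logits), np.argmax(adjusted))  # prints: 0 2
```

In the toy example, the unadjusted prediction is the head class (index 0), while subtracting the log-priors lets the rarest class (index 2) win, illustrating how the adjustment counteracts the bias towards dominant labels.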
Jul-14-2020