Pearls from Pebbles: Improved Confidence Functions for Auto-labeling
Vishwakarma, Harit, Chen, Reid, Tay, Sui Jiet, Namburi, Satya Sai Srinath, Sala, Frederic, Vinayak, Ramya Korlakai
Auto-labeling is an important family of techniques that produce labeled training sets with minimal manual labeling. A prominent variant, threshold-based auto-labeling (TBAL), works by finding a threshold on a model's confidence scores above which it can accurately label unlabeled data points. However, many models are known to produce overconfident scores, leading to poor TBAL performance. While a natural idea is to apply off-the-shelf calibration methods to alleviate the overconfidence issue, such methods still fall short. Rather than experimenting with ad-hoc choices of confidence functions, we propose a framework for studying the \emph{optimal} TBAL confidence function. We develop a tractable version of the framework to obtain \texttt{Colander} (Confidence functions for Efficient and Reliable Auto-labeling), a new post-hoc method specifically designed to maximize performance in TBAL systems. We perform an extensive empirical evaluation of \texttt{Colander} and compare it against methods designed for calibration. \texttt{Colander} achieves up to $60\%$ improvement in coverage over the baselines while maintaining auto-labeling error below $5\%$ and using the same amount of labeled data as the baselines.
Apr-24-2024
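To make the TBAL idea from the abstract concrete, here is a minimal sketch of the threshold-selection step: on a held-out validation set, pick the smallest confidence threshold whose error among points above it stays within a budget (e.g. the $5\%$ used in the paper's experiments), then auto-label only the unlabeled points that clear it. The function names and the simple scan below are illustrative assumptions, not the paper's actual algorithm, which additionally accounts for validation-set uncertainty.

```python
def select_threshold(val_conf, val_correct, max_error=0.05):
    """Smallest threshold whose validation error above it is <= max_error.

    val_conf: confidence score per validation point.
    val_correct: 1 if the model's prediction was correct, else 0.
    Returns None when no threshold meets the error budget.
    """
    # Sort validation points by confidence, ascending.
    pairs = sorted(zip(val_conf, val_correct))
    for i in range(len(pairs)):
        # Error among all points at or above this candidate threshold.
        covered = [correct for _, correct in pairs[i:]]
        err = 1.0 - sum(covered) / len(covered)
        if err <= max_error:
            return pairs[i][0]
    return None


def auto_label(conf, preds, threshold):
    """Keep only predictions whose confidence clears the threshold."""
    return [(p, c) for p, c in zip(preds, conf) if c >= threshold]
```

Overconfident scores hurt here because low-accuracy points can carry high confidence, forcing the threshold up and shrinking coverage; \texttt{Colander} addresses this by learning a post-hoc confidence function that maximizes coverage subject to the error constraint, rather than reusing the model's raw scores.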