Benchmarking noisy label detection methods
Henrique Pickler, Jorge K. S. Kamassury, Danilo Silva
Abstract

Label noise is a common problem in real-world datasets, affecting both model training and validation. Clean data are essential for achieving strong performance and ensuring reliable evaluation. While various techniques have been proposed to detect noisy labels, there is no clear consensus on the optimal approach. We perform a comprehensive benchmark of detection methods by decomposing them into three fundamental components: a label agreement function, an aggregation method, and an information-gathering approach (in-sample vs. out-of-sample). This decomposition applies to many existing detection methods and enables systematic comparison across diverse approaches. To compare methods fairly, we propose a unified benchmark task: detecting a fraction of training samples equal to the dataset's noise rate. We also introduce a novel metric, the false negative rate at this fixed operating point. We find that in-sample information gathering with average probability aggregation, combined with the logit margin as the label agreement function, achieves the best results across most scenarios. Our findings provide practical guidance for designing new detection methods and for selecting techniques for specific applications.

Keywords: Noisy label detection, Noisy labels, Dataset cleaning, Data quality, Benchmark, Neural networks

1. Introduction

Most supervised learning methods assume a perfectly labeled dataset. However, training data often contain incorrectly labeled instances. Even large, standard benchmark datasets, such as CIFAR, ImageNet, and MS-COCO, are known to contain noisy labels [1, 2].
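The benchmark task described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function names (`logit_margin`, `detect_noisy`, `false_negative_rate`) and the toy data are illustrative assumptions. It scores each sample with the logit margin (the logit of the assigned label minus the largest competing logit), flags the fraction of lowest-scoring samples equal to the noise rate, and reports the false negative rate at that fixed operating point:

```python
import numpy as np

def logit_margin(logits, labels):
    """Label agreement score: logit of the given label minus the
    largest logit among all other classes. Low values suggest the
    model disagrees with the assigned label."""
    n = len(labels)
    given = logits[np.arange(n), labels]
    others = logits.copy()
    others[np.arange(n), labels] = -np.inf  # mask out the given label
    return given - others.max(axis=1)

def detect_noisy(scores, noise_rate):
    """Unified benchmark task: flag the fraction of samples with the
    lowest agreement scores, equal to the dataset's noise rate."""
    k = int(round(noise_rate * len(scores)))
    return np.argsort(scores)[:k]

def false_negative_rate(flagged, true_noisy_idx):
    """Fraction of truly noisy samples NOT flagged at this operating
    point (the proposed fixed-operating-point metric)."""
    flagged = set(np.asarray(flagged).tolist())
    missed = sum(1 for i in true_noisy_idx if i not in flagged)
    return missed / len(true_noisy_idx)

# Toy example: 4 samples, 3 classes; sample 3 is mislabeled.
logits = np.array([[5., 0., 0.],
                   [0., 5., 0.],
                   [0., 0., 5.],
                   [0., 5., 0.]])
labels = np.array([0, 1, 2, 0])  # sample 3's true class is 1

scores = logit_margin(logits, labels)      # sample 3 gets margin -5
flagged = detect_noisy(scores, 0.25)       # flag bottom 25% (1 sample)
fnr = false_negative_rate(flagged, [3])    # 0.0: the noisy sample was caught
```

In practice, per-epoch scores would first be aggregated (e.g., by averaging the probabilities the model assigns across training epochs, as in the best-performing configuration reported above) before thresholding.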
Oct-21-2025