Which Strategies Matter for Noisy Label Classification? Insight into Loss and Uncertainty

Wonyoung Shin, Jung-Woo Ha, Shengzhe Li, Yongwoo Cho, Hoyean Song, Sunyoung Kwon

arXiv.org Machine Learning 

Label noise is a critical factor that degrades the generalization performance of deep neural networks, leading to severe issues in real-world problems. Existing studies have employed strategies based on either loss or uncertainty to address noisy labels, and, notably, some of these strategies contradict each other: emphasizing versus discarding uncertain samples, or concentrating on high- versus low-loss samples. To elucidate how such opposing strategies can each enhance model performance, and to offer insights into training with noisy labels, we present an analysis of how the loss and uncertainty values of samples change throughout the training process. Building on this analysis, we design a new robust training method that emphasizes clean and informative samples while minimizing the influence of noise, using both loss and uncertainty. We demonstrate the effectiveness of our method with extensive experiments on synthetic and real-world datasets across various deep learning models. The results show that our method significantly outperforms other state-of-the-art methods and can be applied regardless of neural network architecture.
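The abstract does not specify how loss and uncertainty are combined, so the following is a minimal illustrative sketch, not the authors' method: one hypothetical way to derive per-sample weights that emphasize low-loss (likely clean) samples and down-weight high-uncertainty ones, using cross-entropy loss and predictive entropy as the uncertainty proxy.

```python
import numpy as np

def predictive_entropy(probs):
    # Entropy of the softmax output, a common uncertainty proxy.
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def sample_weights(probs, labels):
    # Per-sample cross-entropy loss w.r.t. the (possibly noisy) labels.
    losses = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    # Entropy normalized to [0, 1] by the maximum log(num_classes).
    unc = predictive_entropy(probs) / np.log(probs.shape[1])
    # Hypothetical rule: down-weight high-loss (likely mislabeled)
    # and high-uncertainty samples.
    return np.exp(-losses) * (1.0 - unc)

# Confident prediction matching its label vs. a confident prediction
# that disagrees with its label (a likely noisy label).
probs = np.array([[0.90, 0.05, 0.05],
                  [0.05, 0.90, 0.05]])
labels = np.array([0, 0])
weights = sample_weights(probs, labels)
```

Under this sketch the first sample (low loss, low uncertainty) receives a much larger weight than the second, so a weighted training loss would largely ignore the suspected noisy label.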
