Toward Robustness against Label Noise in Training Deep Discriminative Neural Networks

Neural Information Processing Systems

Collecting large training datasets annotated with high-quality labels is costly and time-consuming. This paper proposes a novel framework for training deep convolutional neural networks from noisily labeled datasets that can be obtained cheaply. The problem is formulated using an undirected graphical model that represents the relationship between noisy and clean labels and is trained in a semi-supervised setting. In our formulation, inference over the latent clean labels is tractable and is regularized during training using auxiliary sources of information. The proposed model is applied to the image labeling problem and is shown to be effective both at labeling unseen images and at reducing label noise during training on the CIFAR-10 and MS COCO datasets.
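
As a hedged illustration of the general idea (a simpler directed "noise channel" rather than the paper's undirected graphical model), the sketch below marginalizes over a latent clean label through a learned class-transition matrix; the `backbone` classifier and the initialization scale are assumptions for this example, not details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyChannelClassifier(nn.Module):
    """Sketch: classifier over latent clean labels plus a learned noise channel."""

    def __init__(self, backbone, num_classes):
        super().__init__()
        self.backbone = backbone  # any module mapping inputs -> clean-label logits
        # Unnormalized transition scores T[clean, noisy], initialized near the
        # identity so training starts from "observed labels are clean".
        self.transition = nn.Parameter(5.0 * torch.eye(num_classes))

    def forward(self, x):
        clean_lp = F.log_softmax(self.backbone(x), dim=1)  # log p(clean | x)
        noise_lp = F.log_softmax(self.transition, dim=1)   # log p(noisy | clean)
        # Marginalize the latent clean label:
        #   p(noisy | x) = sum_c p(c | x) * p(noisy | c)
        return torch.logsumexp(clean_lp.unsqueeze(2) + noise_lp.unsqueeze(0), dim=1)

# Training minimizes the NLL of the observed *noisy* labels:
#   loss = F.nll_loss(model(x), noisy_y)
# At test time, clean-label predictions come from model.backbone(x).argmax(1).
```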


Using noise resilience for ranking generalization of deep neural networks

arXiv.org Machine Learning

Recent papers have shown that sufficiently overparameterized neural networks can perfectly fit even random labels. It is therefore crucial to understand what underlies the generalization performance of a network on real-world data. In this work, we propose several measures that predict the generalization error of a network given its training data and parameters. Using one of these measures, based on the noise resilience of the network, we secured 5th position in the Predicting Generalization in Deep Learning (PGDL) competition at NeurIPS 2020.
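
The abstract does not spell out the measure, so the following is only a plausible sketch of one noise-resilience score (perturb the trained weights with Gaussian noise and record how far training accuracy falls); `sigma`, `trials`, and the per-parameter noise scaling are assumptions, not the authors' exact PGDL measure.

```python
import copy
import torch

@torch.no_grad()
def noise_resilience_score(model, loader, sigma=0.01, trials=5, device="cpu"):
    """Higher score = training accuracy survives weight perturbations better."""
    def accuracy(m):
        correct = total = 0
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            correct += (m(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
        return correct / total

    base = accuracy(model)
    drops = []
    for _ in range(trials):
        noisy = copy.deepcopy(model)
        for p in noisy.parameters():
            # Scale the noise to each parameter tensor's own magnitude.
            p.add_(sigma * p.abs().mean() * torch.randn_like(p))
        drops.append(base - accuracy(noisy))
    return -sum(drops) / len(drops)  # smaller average drop => higher score
```

The intuition, shared with flat-minima work, is that networks whose training accuracy degrades little under such perturbations sit in flatter regions of the loss landscape and are predicted to generalize better.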


Synthetic vs Real: Deep Learning on Controlled Noise

arXiv.org Machine Learning

Performing controlled experiments on noisy data is essential to thoroughly understanding deep learning across a spectrum of noise levels. Due to the lack of suitable datasets, previous research has only examined deep learning on controlled synthetic noise, and real-world noise has never been systematically studied in a controlled setting. To this end, this paper establishes a benchmark of real-world noisy labels at 10 controlled noise levels. As real-world noise possesses unique properties, we conduct a large-scale study across a variety of noise levels and types, architectures, methods, and training settings to understand the difference. Our study shows that: (1) Deep Neural Networks (DNNs) generalize much better on real-world noise. We hope our benchmark, as well as our findings, will facilitate deep learning research on noisy data.

"You take the blue pill, you wake up in your bed and believe whatever you want to believe. You take the red pill and I show you how deep the rabbit hole goes. Remember, all I'm offering is the truth." (Morpheus, The Matrix, 1999)

Deep Neural Networks (DNNs) trained on noisy data demonstrate intriguing properties. For example, DNNs are capable of memorizing completely random training labels yet generalize poorly on clean test data (Zhang et al., 2017). When trained with stochastic gradient descent, DNNs learn patterns first before memorizing the label noise (Arpit et al., 2017). These findings inspired recent research on noisy data: since training data are usually noisy, the fact that DNNs are able to memorize noisy labels highlights the importance of deep learning research on noisy data. To study DNNs on noisy data, previous work often performs controlled experiments by injecting a series of synthetic noises into a well-annotated dataset. The noise level p may vary in the range 0%-100%, where p = 0% is the clean dataset and p = 100% represents a dataset with zero correct labels.
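
For concreteness, here is a minimal sketch of the standard synthetic-noise protocol described above (symmetric/uniform flips; the function name and seeding are illustrative, not from the paper): a fraction p of labels is flipped uniformly to a different class, so p = 0.0 keeps the dataset clean and p = 1.0 leaves no correct labels.

```python
import numpy as np

def inject_symmetric_noise(labels, p, num_classes, seed=0):
    """Flip a fraction p of labels uniformly to one of the other classes."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    flip = rng.random(len(labels)) < p
    # Offsets in 1..num_classes-1 guarantee the new label differs from the old.
    offsets = rng.integers(1, num_classes, size=int(flip.sum()))
    labels[flip] = (labels[flip] + offsets) % num_classes
    return labels

# Example: 40% symmetric noise on CIFAR-10-style labels.
# noisy_labels = inject_symmetric_noise(clean_labels, p=0.4, num_classes=10)
```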


Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem

arXiv.org Machine Learning

Classifiers used in the wild, in particular in safety-critical systems, should not only have good generalization properties but should also know when they don't know; in particular, they should make low-confidence predictions far away from the training data. We show that ReLU-type neural networks, which yield a piecewise-linear classifier function, fail in this regard: they almost always produce high-confidence predictions far away from the training data. For bounded domains like images, we propose a new robust optimization technique, similar to adversarial training, which enforces low-confidence predictions far away from the training data. We show that this technique is surprisingly effective at reducing the confidence of predictions far away from the training data while maintaining high-confidence predictions, and similar test error, on the original classification task compared to standard training.
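
A minimal sketch of a training objective in this spirit (an assumed simplification: it penalizes confidence on plain uniform-noise images and omits any inner adversarial search for worst-case far-away points):

```python
import torch
import torch.nn.functional as F

def low_confidence_far_away_loss(model, x, y, lam=1.0):
    """Cross-entropy on real data plus a confidence penalty on noise inputs."""
    ce = F.cross_entropy(model(x), y)
    # Crude stand-in for "far away from the training data" on a bounded
    # image domain: uniform noise with the same shape as the batch.
    x_far = torch.rand_like(x)
    log_probs = F.log_softmax(model(x_far), dim=1)
    # Minimizing the maximum log-probability pushes the far-away predictions
    # toward the uniform distribution (log 1/K per class).
    conf_penalty = log_probs.max(dim=1).values.mean()
    return ce + lam * conf_penalty
```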


On the Resistance of Neural Nets to Label Noise

arXiv.org Machine Learning

We investigate the behavior of convolutional neural networks (CNNs) in the presence of label noise. We show empirically that the prediction of a CNN for a given test sample depends on the labels of the training samples in its local neighborhood, much as a K-nearest-neighbors (K-NN) classifier works. With this understanding, we derive an analytical expression for the expected accuracy of a K-NN, and hence a CNN, classifier for any level of noise. In particular, we show that K-NN, and CNN, classifiers are resistant to label noise that is randomly spread across the training set, but are very sensitive to label noise that is concentrated. Experiments on real datasets validate our analytical expression by showing that it matches the empirical results for varying degrees of label noise.
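
To make the K-NN intuition concrete, consider a simplified binary case (an illustrative assumption, not the paper's exact expression): with symmetric noise rate eps spread randomly over the training set, each of the K neighbor labels is independently correct with probability 1 - eps, and the vote succeeds when a majority of labels survives the noise.

```python
from math import comb

def knn_expected_accuracy(k, eps):
    """P(majority of k i.i.d. labels correct), each correct w.p. 1 - eps."""
    return sum(comb(k, m) * (1 - eps) ** m * eps ** (k - m)
               for m in range(k // 2 + 1, k + 1))  # majority: m > k/2 (odd k)

# Randomly spread noise is absorbed as k grows (eps = 0.2):
for k in (1, 5, 15, 51):
    print(k, round(knn_expected_accuracy(k, 0.2), 4))  # rises toward 1.0
```

Concentrated noise breaks this picture: if mislabeled points cluster in one neighborhood, the effective local eps there approaches 1 and the vote fails regardless of k, consistent with the sensitivity the authors report.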