An Introduction to Confident Learning: Finding and Learning with Label Errors in Datasets

#artificialintelligence

This post overviews the paper Confident Learning: Estimating Uncertainty in Dataset Labels authored by Curtis G. Northcutt, Lu Jiang, and Isaac L. Chuang. If you've ever used datasets like CIFAR, MNIST, ImageNet, or IMDB, you likely assumed the class labels are correct. Why? Principled approaches for characterizing and finding label errors in massive datasets are hard to develop, and existing solutions are limited. Surprise: there are likely at least 100,000 label issues in ImageNet. In this post, I discuss an emerging, principled framework to identify label errors, characterize label noise, and learn with noisy labels known as confident learning (CL), open-sourced as the cleanlab Python package.
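To make the core intuition concrete, here is a minimal, self-contained sketch (not the cleanlab API): use out-of-sample predicted probabilities and a per-class confidence threshold to flag examples whose given label looks inconsistent with the model's predictions. The function name and the toy data below are illustrative assumptions, not the package's implementation.

```python
import numpy as np

def find_likely_label_errors(labels, pred_probs):
    """Flag examples whose given label looks inconsistent with the model's
    out-of-sample predicted probabilities, using per-class confidence
    thresholds (the core intuition behind confident learning)."""
    n_classes = pred_probs.shape[1]
    # Threshold for class j: mean predicted probability of class j
    # over the examples that are actually labeled j.
    thresholds = np.array([
        pred_probs[labels == j, j].mean() for j in range(n_classes)
    ])
    suspect = []
    for i, y in enumerate(labels):
        # Classes the model is "confident" about for this example.
        confident_classes = np.where(pred_probs[i] >= thresholds)[0]
        # Suspect if some class clears its threshold but the given label does not.
        if len(confident_classes) > 0 and y not in confident_classes:
            suspect.append(i)
    return np.array(suspect)

# Toy usage with random data; in practice pred_probs should come from
# cross-validated (out-of-sample) model predictions, never in-sample fits.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=100)
pred_probs = rng.dirichlet(np.ones(3), size=100)
print(find_likely_label_errors(labels, pred_probs))
```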


Best of arXiv.org for AI, Machine Learning, and Deep Learning – October 2019 - insideBIGDATA

#artificialintelligence

Researchers from all over the world contribute to this repository as a prelude to the peer review process for publication in traditional journals. We hope to save you some time by picking out the articles that show the most promise for the typical data scientist. The articles listed below represent a fraction of all articles appearing on the preprint server. They are listed in no particular order with a link to each paper along with a brief overview. Especially relevant articles are marked with a "thumbs up" icon.


New research highlights how error-ridden the data used to train AI is - RealKM

#artificialintelligence

Originally posted on The Horizons Tracker. The world is awash with data, and it's tempting to think that this data is what's used to train the AI systems that are increasingly prevalent around the world. New research from MIT highlights that not only is AI often trained on relatively small samples of curated data, but that this data often contains errors which undermine the training delivered to machine learning algorithms. Indeed, across 10 of the most-cited datasets used by scientists to train machine learning systems, the researchers found that 3% of the data was mislabeled or inaccurate. It has long been suspected that the data used to train AI systems is not what it could be, but until now no one has been able to quantify just how poor it is.


Confident Learning: Estimating Uncertainty in Dataset Labels

arXiv.org Machine Learning

Learning exists in the context of data, yet notions of confidence typically focus on model predictions, not label quality. Confident learning (CL) has emerged as an approach for characterizing, identifying, and learning with noisy labels in datasets, based on the principles of pruning noisy data, counting to estimate noise, and ranking examples to train with confidence. Here, we generalize CL, building on the assumption of a classification noise process, to directly estimate the joint distribution between noisy (given) labels and uncorrupted (unknown) labels. This generalized CL, open-sourced as cleanlab, is provably consistent under reasonable conditions, and experimentally performant on ImageNet and CIFAR, outperforming recent approaches, e.g., MentorNet, by 30% or more when label noise is non-uniform. cleanlab also quantifies ontological class overlap, and can increase model accuracy (e.g., ResNet) by providing clean data for training.
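The counting step in the abstract can be illustrated with a rough sketch: guess a "true" label for each example by thresholded predicted probabilities, count (noisy label, guessed true label) pairs, and normalize into a joint distribution. The helper name and the simple thresholding rule below are simplifying assumptions, not the paper's exact confident-joint construction.

```python
import numpy as np

def estimate_noisy_true_joint(labels, pred_probs):
    """Rough sketch of confident-joint counting: for each example, guess a
    'true' label as the most probable class among those whose predicted
    probability exceeds that class's threshold, then count (noisy, true)
    pairs and normalize into an estimated joint distribution."""
    m = pred_probs.shape[1]
    # Per-class thresholds: mean self-confidence of examples given that label.
    thresholds = np.array([pred_probs[labels == j, j].mean() for j in range(m)])
    confident_joint = np.zeros((m, m))  # rows: given label, cols: guessed true label
    for i in range(len(labels)):
        above = np.where(pred_probs[i] >= thresholds)[0]
        if len(above) == 0:
            continue  # no confident guess for this example; skip it
        true_guess = above[np.argmax(pred_probs[i, above])]
        confident_joint[labels[i], true_guess] += 1
    # Normalize counts; the off-diagonal mass estimates the overall noise rate.
    return confident_joint / confident_joint.sum()
```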


Confident Learning: Estimating Uncertainty in Dataset Labels

Journal of Artificial Intelligence Research

Learning exists in the context of data, yet notions of confidence typically focus on model predictions, not label quality. Confident learning (CL) is an alternative approach which focuses instead on label quality by characterizing and identifying label errors in datasets, based on the principles of pruning noisy data, counting with probabilistic thresholds to estimate noise, and ranking examples to train with confidence. Whereas numerous studies have developed these principles independently, here, we combine them, building on the assumption of a class-conditional noise process to directly estimate the joint distribution between noisy (given) labels and uncorrupted (unknown) labels. This results in a generalized CL which is provably consistent and experimentally performant. We present sufficient conditions where CL exactly finds label errors, and show CL performance exceeding seven recent competitive approaches for learning with noisy labels on the CIFAR dataset. Uniquely, the CL framework is not coupled to a specific data modality or model (e.g., we use CL to find several label errors in the presumed error-free MNIST dataset and improve sentiment classification on text data in Amazon Reviews). We also employ CL on ImageNet to quantify ontological class overlap (e.g., estimating 645 missile images are mislabeled as their parent class projectile), and moderately increase model accuracy (e.g., for ResNet) by cleaning data prior to training. These results are replicable using the open-source cleanlab release.
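For readers who want to try this on their own data, the following hedged sketch shows one plausible end-to-end workflow with the open-source cleanlab package: obtain out-of-sample predicted probabilities via cross-validation, flag likely label issues, drop them, and retrain. Function and parameter names follow the cleanlab 2.x API (cleanlab.filter.find_label_issues); check the package documentation for the exact signatures in your installed version.

```python
# Assumed workflow sketch, not the authors' exact pipeline.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from cleanlab.filter import find_label_issues  # cleanlab 2.x API (assumption)

X, y = load_digits(return_X_y=True)

# Out-of-sample probabilities; in-sample fits would leak and hide label errors.
model = LogisticRegression(max_iter=1000)
pred_probs = cross_val_predict(model, X, y, cv=5, method="predict_proba")

# Indices of examples whose given label is likely wrong, ranked by confidence.
issue_idx = find_label_issues(
    labels=y,
    pred_probs=pred_probs,
    return_indices_ranked_by="self_confidence",
)

# Retrain on the data with suspected label errors removed.
mask = np.ones(len(y), dtype=bool)
mask[issue_idx] = False
model.fit(X[mask], y[mask])
```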