
We thank the reviewers for their feedback and reply to the major points raised by each reviewer individually

Neural Information Processing Systems

We thank the reviewers for their feedback and reply to the major points raised by each reviewer individually. Our paper focuses on ImageNet classification because this is what almost all prior work on robustness has studied. We hope that future work (e.g., transfer learning research) can build on our testbed. Our results are substantially more nuanced than "more data helps": (i) We show that only more data currently helps. This is a strong negative result. Appendix D contains additional results for more granular trends.


Review for NeurIPS paper: Measuring Robustness to Natural Distribution Shifts in Image Classification

Neural Information Processing Systems

I have to look more deeply into this, but judging from a quick read, their results do indeed change my perception of the performance gap on ImageNet-V2. Nevertheless, I think ObjectNet is the more obvious example and should be front and center.


Problem-dependent attention and effort in neural networks with applications to image resolution and model selection

Rohlfs, Chris

arXiv.org Artificial Intelligence

This paper introduces two new ensemble-based methods to reduce the data and computation costs of image classification. They can be used with any set of classifiers and do not require additional training. In the first approach, data usage is reduced by only analyzing a full-sized image if the model has low confidence in classifying a low-resolution pixelated version. When applied to the best-performing classifiers considered here, data usage is reduced by 61.2% on MNIST, 69.6% on KMNIST, 56.3% on FashionMNIST, 84.6% on SVHN, 40.6% on ImageNet, and 27.6% on ImageNet-V2, all with a less than 5% reduction in accuracy. However, for CIFAR-10, the pixelated data are not particularly informative, and the ensemble approach increases data usage while reducing accuracy. In the second approach, compute costs are reduced by only using a complex model if a simpler model has low confidence in its classification. Computation cost is reduced by 82.1% on MNIST, 47.6% on KMNIST, 72.3% on FashionMNIST, 86.9% on SVHN, 89.2% on ImageNet, and 81.5% on ImageNet-V2, all with a less than 5% reduction in accuracy; for CIFAR-10 the corresponding improvement is smaller at 13.5%. When cost is no object, choosing the projection from the most confident model for each observation increases validation accuracy to 81.0% from 79.3% for ImageNet and to 69.4% from 67.5% for ImageNet-V2.
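The confidence-gated cascade described in the abstract can be sketched roughly as follows. This is a minimal illustration under assumed interfaces (softmax probability arrays from a cheap and an expensive model, and an illustrative confidence threshold), not the paper's exact procedure:

```python
import numpy as np

def cascade_predict(cheap_probs, expensive_probs, threshold=0.9):
    """Confidence cascade: accept the cheap model's prediction when its
    top-class probability is at least `threshold`; otherwise fall back to
    the expensive model. Both inputs are (n_samples, n_classes) softmax
    arrays. Returns predictions and the fraction of inputs that required
    the expensive model."""
    cheap_conf = cheap_probs.max(axis=1)
    use_expensive = cheap_conf < threshold
    preds = cheap_probs.argmax(axis=1)
    preds[use_expensive] = expensive_probs[use_expensive].argmax(axis=1)
    return preds, use_expensive.mean()
```

In practice the threshold would be tuned on a validation set to trade off the accuracy drop against the fraction of inputs routed to the expensive model (or, in the first approach, to the full-resolution image).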


Uncertainty Sets for Image Classifiers using Conformal Prediction

Angelopoulos, Anastasios, Bates, Stephen, Malik, Jitendra, Jordan, Michael I.

arXiv.org Machine Learning

Convolutional image classifiers can achieve high predictive accuracy, but quantifying their uncertainty remains an unresolved challenge, hindering their deployment in consequential settings. Existing uncertainty quantification techniques, such as Platt scaling, attempt to calibrate the network's probability estimates, but they do not have formal guarantees. We present an algorithm that modifies any classifier to output a predictive set containing the true label with a user-specified probability, such as 90%. The algorithm is simple and fast like Platt scaling, but provides a formal finite-sample coverage guarantee for every model and dataset. Furthermore, our method generates much smaller predictive sets than alternative methods, since we introduce a regularizer to stabilize the small scores of unlikely classes after Platt scaling. In experiments on both Imagenet and Imagenet-V2 with a ResNet-152 and other classifiers, our scheme outperforms existing approaches, achieving exact coverage with sets that are often factors of 5 to 10 smaller.
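For context, the basic split-conformal recipe that this line of work builds on can be sketched as below. This is the plain version with the score 1 minus the true-class probability; the paper's method additionally regularizes the scores (the "much smaller predictive sets" result), which is not reproduced here. Array shapes and the `alpha=0.1` level are illustrative assumptions:

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split-conformal prediction sets from softmax scores.
    Score = 1 - probability assigned to the true class on a held-out
    calibration set; the finite-sample-corrected (1-alpha) quantile of
    these scores gives a threshold, and each test example keeps every
    class whose probability is at least 1 - threshold."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # ceil((n+1)(1-alpha))/n quantile yields the marginal coverage guarantee
    level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(scores, min(level, 1.0), method="higher")
    return [np.where(p >= 1.0 - q)[0] for p in test_probs]
```

The coverage guarantee (the true label falls in the set with probability at least 1 - alpha) holds for any model and any exchangeable data split, which is the "formal finite-sample" property the abstract refers to.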


Identifying Statistical Bias in Dataset Replication

Engstrom, Logan, Ilyas, Andrew, Santurkar, Shibani, Tsipras, Dimitris, Steinhardt, Jacob, Madry, Aleksander

arXiv.org Machine Learning

The primary objective of supervised learning is to develop models that generalize robustly to unseen data. Benchmark test sets provide a proxy for out-of-sample performance, but can outlive their usefulness in some cases. For example, evaluating on benchmarks alone may steer us towards models that adaptively overfit [Reu03; RFR08; Dwo 15] to the finite test set and do not generalize. Alternatively, we might select for models that are sensitive to insignificant aspects of the dataset creation process and thus do not generalize robustly (e.g., models that are sensitive to the exact set of humans who annotated the test set). To diagnose these issues, recent work has generated new, previously "unseen" testbeds for standard datasets through a process known as dataset replication. Though not yet widespread in machine learning, dataset replication is a natural analogue to experimental replication studies in the natural sciences (cf.