The Visual Task Adaptation Benchmark
Xiaohua Zhai, Joan Puigcerver, Alexander Kolesnikov, Pierre Ruyssen, Carlos Riquelme, Mario Lucic, Josip Djolonga, Andre Susano Pinto, Maxim Neumann, Alexey Dosovitskiy, Lucas Beyer, Olivier Bachem, Michael Tschannen, Marcin Michalski, Olivier Bousquet, Sylvain Gelly, Neil Houlsby
Representation learning promises to unlock deep learning for the long tail of vision tasks without expensive labelled datasets. Yet, the absence of a unified yardstick to evaluate general visual representations hinders progress. Many sub-fields promise representations, but each has different evaluation protocols that are either too constrained (linear classification), limited in scope (ImageNet, CIFAR, Pascal-VOC), or only loosely related to representation quality (generation). We present the Visual Task Adaptation Benchmark (VTAB): a diverse, realistic, and challenging benchmark to evaluate representations. VTAB embodies one principle: good representations adapt to unseen tasks with few examples. We run a large VTAB study of popular algorithms, answering questions like: How effective are ImageNet representations on non-standard datasets? Is self-supervision useful if one already has labels?

Deep learning has revolutionized computer vision. Distributed representations learned from ...
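The "adapt to unseen tasks with few examples" principle can be made concrete with a short sketch: score one pretrained representation by adapting it to each downstream task under a small labelled budget, then average the resulting test accuracies. The task names, the 1000-example budget, and every helper function below are illustrative stand-ins for the real datasets and fine-tuning machinery, not the benchmark's actual API.

```python
import random

TASKS = ["natural/cifar100", "specialized/patch_camelyon", "structured/dsprites"]
ADAPTATION_BUDGET = 1000  # "few examples": a fixed, small number of labels per task

def load_task(name, num_examples):
    # Mock dataset of (feature, label) pairs; real tasks are image datasets.
    return [(random.random(), random.randint(0, 1)) for _ in range(num_examples)]

def adapt(representation, train_set):
    # Mock adaptation step; a real run would fine-tune the network here.
    return lambda x: int(representation(x) > 0.5)

def accuracy(model, test_set):
    return sum(model(x) == y for x, y in test_set) / len(test_set)

def vtab_score(representation):
    """Mean test accuracy across all tasks after low-budget adaptation."""
    scores = []
    for task in TASKS:
        train = load_task(task, ADAPTATION_BUDGET)
        test = load_task(task, 500)
        scores.append(accuracy(adapt(representation, train), test))
    return sum(scores) / len(scores)

print(vtab_score(lambda x: x))  # a trivial "representation", for demonstration
```

The key design choice this mirrors is that a single scalar (mean accuracy across diverse tasks) summarizes how well a representation transfers, rather than performance on any one dataset.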
Oct-1-2019