Pre-training of Lightweight Vision Transformers on Small Datasets with Minimally Scaled Images
arXiv.org Artificial Intelligence
Can a lightweight Vision Transformer (ViT) match or exceed the performance of Convolutional Neural Networks (CNNs) such as ResNet on small datasets with low image resolutions? This report demonstrates that a pure ViT can indeed achieve superior performance through pre-training with a masked autoencoder technique and minimal image scaling. Our experiments on the CIFAR-10 and CIFAR-100 datasets used ViT models with fewer than 3.65 million parameters and a multiply-accumulate (MAC) count below 0.27G, qualifying them as 'lightweight' models. Unlike previous approaches, our method attains state-of-the-art performance among comparable lightweight transformer-based architectures without significantly scaling up the CIFAR-10 and CIFAR-100 images. This underscores the efficiency of our model, not only in handling small datasets but also in processing images close to their original scale.
Feb-6-2024
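The abstract's 'lightweight' criterion (under 3.65 million parameters) can be sanity-checked with a rough parameter count for a plain ViT. The sketch below assumes an illustrative configuration (patch size 4 on 32x32 CIFAR images, embedding dimension 192, 7 transformer blocks); the paper's exact architecture is not given in the abstract, so these dimensions are assumptions, not the authors' model.

```python
def vit_param_count(img_size=32, patch_size=4, in_chans=3,
                    dim=192, depth=7, mlp_ratio=4, num_classes=10):
    """Approximate parameter count for a plain ViT classifier.

    Illustrative only: the dimensions are assumed, not taken from the paper.
    """
    num_patches = (img_size // patch_size) ** 2
    hidden = dim * mlp_ratio
    patch_embed = patch_size * patch_size * in_chans * dim + dim  # linear patch projection + bias
    pos_embed = (num_patches + 1) * dim   # learnable positions, incl. [CLS] slot
    cls_token = dim
    attn = 4 * dim * dim + 4 * dim        # fused QKV + output projection, with biases
    mlp = 2 * dim * hidden + hidden + dim # two linear layers with biases
    norms = 2 * (2 * dim)                 # two LayerNorms (weight + bias each)
    block = attn + mlp + norms
    final_norm = 2 * dim
    head = dim * num_classes + num_classes
    return patch_embed + pos_embed + cls_token + depth * block + final_norm + head

print(vit_param_count())  # roughly 3.1M with the assumed config, under the 3.65M budget
```

Lowering the depth or embedding dimension trades accuracy for a smaller footprint; the abstract's 0.27G MAC ceiling constrains the same knobs, since MACs in a ViT scale with depth, dimension, and token count.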