Improving Few-Shot Visual Classification with Unlabelled Examples
Bateni, Peyman, Barber, Jarred, van de Meent, Jan-Willem, Wood, Frank
We propose a transductive meta-learning method that uses unlabelled instances to improve few-shot image classification performance. Our approach combines a regularized Mahalanobis-distance-based soft k-means clustering procedure with a modified state-of-the-art neural adaptive feature extractor to achieve improved test-time classification accuracy using unlabelled data. We evaluate our method on transductive few-shot learning tasks, in which the goal is to jointly predict labels for query (test) examples given a set of support (training) examples. We achieve new state-of-the-art performance on Meta-Dataset, and produce competitive results on the mini- and tiered-ImageNet benchmarks.

Deep learning has revolutionized visual classification, enabled in part by the development of large and diverse sets of curated training data (Szegedy et al., 2014; He et al., 2015; Krizhevsky et al., 2017; Simonyan & Zisserman, 2014; Sornam et al., 2017). However, in many image classification settings, millions of labelled examples are not available; techniques that achieve sufficient classification performance with few labels are therefore required. This has motivated research on few-shot learning (Feyjie et al., 2020; Wang & Yao, 2019; Wang et al., 2019; Bellet et al., 2013), which seeks to develop methods for training classifiers from much smaller datasets.
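To make the clustering component concrete, here is a minimal sketch of transductive soft k-means with regularized Mahalanobis distances. It is an illustrative simplification, not the authors' implementation: the function name, the shrinkage scheme (class scatter blended with a scaled identity via a hypothetical `reg` parameter), and the fixed iteration count are all assumptions; the paper's full method additionally adapts the feature extractor per task.

```python
import numpy as np

def soft_kmeans_mahalanobis(support, support_labels, query, n_iter=5, reg=1.0):
    """Transductive soft k-means sketch with regularized Mahalanobis distances.

    support: (n_s, d) embedded support features
    support_labels: (n_s,) integer class labels in [0, K)
    query: (n_q, d) embedded query features
    Returns soft class probabilities for the query set, shape (n_q, K).
    """
    K, d = support_labels.max() + 1, support.shape[1]

    # Initialise class means from the labelled support examples only.
    means = np.stack([support[support_labels == k].mean(axis=0) for k in range(K)])

    for _ in range(n_iter):
        # Regularized per-class covariance: shrink the class scatter toward
        # the identity so it stays invertible even with very few shots.
        precisions = []
        for k in range(K):
            diff = support[support_labels == k] - means[k]
            cov = diff.T @ diff / max(len(diff), 1) + reg * np.eye(d)
            precisions.append(np.linalg.inv(cov))

        # Squared Mahalanobis distance from each query point to each class mean.
        dists = np.stack([
            np.einsum('nd,de,ne->n', query - means[k], precisions[k], query - means[k])
            for k in range(K)
        ], axis=1)  # (n_q, K)

        # Soft assignments via a softmax over negative distances.
        logits = -0.5 * dists
        logits -= logits.max(axis=1, keepdims=True)
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)

        # Refine each mean using support labels plus soft query assignments,
        # which is what makes the procedure transductive.
        for k in range(K):
            w = probs[:, k]
            labelled = support[support_labels == k]
            means[k] = (labelled.sum(axis=0) + w @ query) / (len(labelled) + w.sum())

    return probs
```

The regularization term is the key difference from plain soft k-means: with only a handful of support shots per class, the raw class covariance is singular, so some form of shrinkage is needed before inverting it.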
Oct-2-2020