


A. Self-Supervised Learning Methods

Neural Information Processing Systems

We used all the images in the CUB dataset (train, val, and test splits). For example, on the CUB dataset, the performance gain (for k = 5) is 0.249, 1.035, and 2.276 for miniImageNet, tieredImageNet, and ImageNet, respectively.






General Response

Neural Information Processing Systems

We thank all the reviewers for their insightful and encouraging comments. Per your suggestion, we will update the appendix with more explanation of the proof ideas. Similarly, we can extend the convergence results of Theorem 4 in the Appendix from FS to IFS. We expect that the approach developed in this paper will fuel this future investigation, and we will include this in the revision.



Alleviating the Sample Selection Bias in Few-shot Learning by Removing Projection to the Centroid

Neural Information Processing Systems

While a good feature extractor may help cluster unseen data, the task distribution shift between training and testing [25] still makes it hard to estimate the novel class distribution from the small number of samples in the support set. Thus, performance is strongly correlated with the sample quality of the support data.
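The title's idea of removing the projection to the centroid can be illustrated with a minimal sketch. Note this is an assumption-laden illustration, not the paper's exact operator: we assume embeddings are rows of a matrix and simply subtract each embedding's component along the (normalized) centroid direction, which is one way to reduce a bias shared across support samples.

```python
import numpy as np

def remove_centroid_projection(features, centroid):
    """Remove each feature's component along the centroid direction.

    Sketch only: `features` is an (n, d) array of embeddings and
    `centroid` a (d,) vector (e.g., the mean of base-class features).
    The exact debiasing operator in the paper may differ.
    """
    u = centroid / np.linalg.norm(centroid)  # unit vector along the centroid
    proj = features @ u                      # scalar projection of each row
    return features - np.outer(proj, u)      # strip that shared component

# Toy usage: after removal, every embedding is orthogonal
# to the centroid direction.
feats = np.array([[1.0, 2.0], [3.0, 1.0], [0.5, 0.5]])
centroid = feats.mean(axis=0)
adjusted = remove_centroid_projection(feats, centroid)
print(np.allclose(adjusted @ centroid, 0.0))  # → True
```

Because the subtracted component is identical in direction for all samples, this kind of correction targets bias that the whole support set shares, rather than per-sample noise.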