Robust Few-Shot Learning with Adversarially Queried Meta-Learners

Micah Goldblum, Liam Fowl, Tom Goldstein

arXiv.org Machine Learning 

Few-shot learning methods are highly vulnerable to adversarial examples. The goal of our work is to produce networks which both perform well at few-shot tasks and are simultaneously robust to adversarial examples. We adapt adversarial training for meta-learning, we adapt robust architectural features to small networks for meta-learning, we test pre-processing defenses as an alternative to adversarial training for meta-learning, and we investigate the advantages of robust meta-learning over robust transfer-learning for few-shot tasks. This work provides a thorough analysis of adversarially robust methods in the context of meta-learning, and we lay the foundation for future work on defenses for few-shot tasks. Conventional adversarial training and pre-processing defenses aim to produce networks that resist attack (Madry et al., 2017; Zhang et al., 2019; Samangouei et al., 2018), but such defenses rely heavily on the availability of large training datasets. In applications that require few-shot learning, such as face recognition from few images, recognition of a video source from a single clip, or recognition of a new object from few example photos, the conventional robust training pipeline breaks down.
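The core idea of adapting adversarial training to meta-learning is to attack the query examples after the model has adapted to the clean support set, so the outer update rewards parameters that remain robust post-adaptation. The following is a minimal numpy sketch of that pattern on a toy binary task with a linear model and a one-step FGSM-style attack; the function names, the toy data, and the single-attack-step simplification are illustrative assumptions, not the paper's actual algorithm or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)


def logistic_loss(w, X, y):
    """Binary cross-entropy for a linear scorer z = X @ w."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))


def grad_w(w, X, y):
    """Gradient of the loss with respect to the weights."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)


def fgsm_query(w, X, y, eps=0.3):
    """One-step L-infinity attack on the query inputs (FGSM)."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    grad_x = np.outer(p - y, w)  # per-example gradient of the loss w.r.t. x
    return X + eps * np.sign(grad_x)


# Toy few-shot task: clean support set for adaptation, query set for evaluation.
Xs = rng.normal(size=(10, 5)); ys = (Xs[:, 0] > 0).astype(float)
Xq = rng.normal(size=(10, 5)); yq = (Xq[:, 0] > 0).astype(float)

# Inner loop: adapt on the *clean* support examples.
w = np.zeros(5)
for _ in range(50):
    w -= 0.5 * grad_w(w, Xs, ys)

# Adversarial querying: perturb only the query set, then measure the loss that
# the outer meta-update would minimize.
Xq_adv = fgsm_query(w, Xq, yq, eps=0.3)
clean_loss = logistic_loss(w, Xq, yq)
adv_loss = logistic_loss(w, Xq_adv, yq)
```

Because the attack ascends the loss surface of the adapted model, `adv_loss` exceeds `clean_loss`; a meta-learner trained on this adversarial query loss is pushed toward initializations that stay robust after few-shot adaptation.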
