

DISCO: Adversarial Defense with Local Implicit Functions

Neural Information Processing Systems

In this section, we ablate the kernel size used to train DISCO on ImageNet. Table I shows that s = 3 achieves the best performance, which degrades for s = 5 by a significant margin (3.26%). This is consistent with the well-known complexity of synthesizing images with global models, such as GANs.

For a single ImageNet image of size 224, STL requires 23.71 seconds while DISCO (K = 1) only requires 0.027 seconds.

In this section, we list the URL links that are used for training and evaluating DISCO.
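The title's core technique, a local implicit function, can be illustrated with a minimal sketch: an encoder produces per-pixel features, and a small MLP decodes an RGB value at any continuous query coordinate from the nearest feature vector plus the relative offset. Everything below (array sizes, the random stand-in weights, and the `query_rgb` helper) is hypothetical and only meant to convey the general idea, not DISCO's actual trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed feature-map size and channel count; in practice these come
# from a trained convolutional encoder.
H, W, C = 8, 8, 16
feat = rng.standard_normal((H, W, C))

# Tiny random MLP standing in for the trained decoder.
W1 = rng.standard_normal((C + 2, 32)) * 0.1
W2 = rng.standard_normal((32, 3)) * 0.1

def query_rgb(y, x):
    """Decode an RGB value at continuous coordinate (y, x) in [0, 1]^2."""
    # Nearest feature cell for the query point.
    iy = min(int(y * H), H - 1)
    ix = min(int(x * W), W - 1)
    # Relative offset from the cell centre: the "local" conditioning.
    dy = y * H - (iy + 0.5)
    dx = x * W - (ix + 0.5)
    z = np.concatenate([feat[iy, ix], [dy, dx]])
    h = np.maximum(z @ W1, 0.0)   # ReLU hidden layer
    return h @ W2                 # linear RGB head

# Querying a dense coordinate grid reconstructs an image at an
# arbitrary output resolution (here 32x32).
out = np.stack([[query_rgb((i + 0.5) / 32, (j + 0.5) / 32)
                 for j in range(32)] for i in range(32)])
print(out.shape)  # (32, 32, 3)
```

Because the decoder is queried per coordinate, the output resolution is decoupled from the feature-map resolution, which is the property that makes implicit representations attractive for image purification.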