A Experimental Setup in Detail

Neural Information Processing Systems 

We implement our attack framework using Python 3.7.3 and PyTorch 1.7.1. For all our attacks in 4.1, 4.2, 4.3, and 4.5, we use symmetric quantization. In 4.4, where we examine the transferability of our attacks, we use the same quantization scheme [Banner et al., 2019] while re-training clean models.

Prior work showed that a model that is less sensitive to perturbations of its parameters or activations suffers less accuracy degradation after quantization. Alizadeh et al. [2020] look into the decision boundary of a model to examine this sensitivity.

In Eqn. 2, we use label smoothing to reduce the confidence of a model's predictions; Clean is a pre-trained model. Table 6 shows our results. We experiment with an AlexNet model trained on CIFAR-10.
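To make the symmetric quantization setting concrete, the following is a minimal NumPy sketch (not the paper's implementation) of symmetric per-tensor quantization: a single scale maps weights to signed integers in a symmetric range, with no zero point. The function name `symmetric_quantize` and the bit-width default are illustrative assumptions.

```python
import numpy as np

def symmetric_quantize(w, n_bits=8):
    # Symmetric (zero-point-free) quantization: map values to signed
    # integers in [-(2^(b-1) - 1), 2^(b-1) - 1] with a single scale
    # derived from the maximum absolute value of the tensor.
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    # Return the dequantized (simulated-quantization) weights and the scale.
    return q * scale, scale

# Toy example: 8-bit quantization of a small weight vector.
w = np.array([0.5, -1.27, 0.03, 1.0])
w_q, s = symmetric_quantize(w, n_bits=8)
```

In this sketch the largest-magnitude weight (−1.27) pins the scale, so all other weights are rounded onto a grid of step `s`; quantization error is the gap between `w` and `w_q`.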
