advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch
Gavin Weiguang Ding, Luyu Wang, Xiaomeng Jin
Machine learning models are vulnerable to "adversarial" perturbations (Szegedy et al., 2013; Biggio et al., 2013). They are adversarial in the sense that, when these artificially constructed perturbations are added to the inputs of a model, human observers do not change their perception, yet the model's predictions can be manipulated.
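As a minimal sketch of crafting such a perturbation with advertorch, the snippet below runs an L-infinity PGD attack, which iteratively perturbs each input within a small eps-ball until the model's prediction is degraded; the `LinfPGDAttack` class and its `perturb` method are part of advertorch's attack API, while the toy model and the random batch of MNIST-shaped inputs are placeholder assumptions standing in for a trained classifier and real data.

```python
import torch
import torch.nn as nn
from advertorch.attacks import LinfPGDAttack

# Placeholder classifier (hypothetical); substitute your own trained network.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

# Dummy MNIST-shaped batch and labels, assumed purely for illustration.
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))

# L-infinity PGD: take nb_iter gradient-sign steps of size eps_iter,
# projecting back into the eps-ball around x after each step, so the
# perturbed inputs stay visually close to the originals.
adversary = LinfPGDAttack(
    model, loss_fn=nn.CrossEntropyLoss(reduction="sum"),
    eps=0.3, nb_iter=40, eps_iter=0.01, rand_init=True,
    clip_min=0.0, clip_max=1.0, targeted=False)

x_adv = adversary.perturb(x, y)  # adversarial examples, same shape as x
```

With a real trained model, `x_adv` would look essentially identical to `x` to a human observer while causing misclassifications, which is precisely the vulnerability described above.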
February 20, 2019