advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch

Gavin Weiguang Ding, Luyu Wang, Xiaomeng Jin

arXiv.org Machine Learning 

Machine learning models are vulnerable to "adversarial" perturbations (Szegedy et al., 2013; Biggio et al., 2013). They are adversarial in the sense that, after these artificially constructed perturbations are added to the inputs of the model, human observers do not change their perception, yet the predictions of the model can be manipulated.
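As a concrete illustration, below is a minimal sketch of one such perturbation method, the fast gradient sign method (FGSM; Goodfellow et al., 2014), which is among the attacks advertorch implements. The names `model`, `x`, `y`, and the budget `eps` are placeholders for this sketch, not part of the advertorch API.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 eps: float = 0.3) -> torch.Tensor:
    """Return an adversarially perturbed copy of `x` under an L-inf budget `eps`."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to a valid input range.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

In advertorch itself, attacks of this kind are exposed as adversary objects (e.g., a gradient-sign attack constructed from a model and a loss function, with a `perturb(x, y)` method), so the perturbation logic above is encapsulated rather than written by hand.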
