Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers
Guang-He Lee, Yang Yuan

Neural Information Processing Systems 

Many powerful classifiers lack robustness in the sense that a slight, potentially unnoticeable manipulation of the input features, e.g., by an adversary, can cause the classifier to change its prediction.
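The smoothed classifiers in the title refer to the randomized-smoothing construction: the prediction at an input is defined as the majority class of a base classifier evaluated on randomly perturbed copies of that input. A minimal sketch of a Monte Carlo estimate of such a smoothed prediction is below; the function names, the Gaussian noise choice, and the toy base classifier are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.5, n_samples=1000, seed=0):
    """Monte Carlo estimate of the smoothed classifier at x:
    the majority class of base_classifier over Gaussian perturbations of x.
    (Illustrative sketch; sigma and n_samples are arbitrary choices here.)"""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(n_samples,) + np.shape(x))
    votes = np.array([base_classifier(x + eps) for eps in noise])
    classes, counts = np.unique(votes, return_counts=True)
    return classes[np.argmax(counts)]  # plurality vote over noisy copies

# Toy base classifier (hypothetical): threshold on the first coordinate.
def base(x):
    return int(x[0] > 0.0)

print(smoothed_predict(base, np.array([1.0, 0.0])))
```

Because the input sits well inside the positive region relative to the noise scale, the vast majority of noisy copies vote for the same class, which is why small adversarial shifts of the input cannot easily flip the smoothed prediction.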
