Reviews: Robustness of classifiers: from adversarial to random noise

Neural Information Processing Systems 

This paper offers a thorough analysis of the effect of both worst-case (adversarial) and random noise on machine learning classifiers. It derives bounds that precisely describe the robustness of classifiers as a function of the curvature of the decision boundary. This leads to some (at least to me) surprisingly general conclusions:

* For random noise, the robustness of classifiers behaves as sqrt(d) times the distance from the datapoint to the classification boundary (where d denotes the dimension of the data), provided the curvature of the decision boundary is sufficiently small. This corroborates the intuition that random noise is less of an issue for high-dimensional data. On the other hand, how do we know the curvature of decision boundaries for general classifiers?
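The sqrt(d) gap between random and adversarial robustness is easy to see in the flat-boundary (zero-curvature) limit. The following sketch is my own illustration, not the paper's experiment: for a linear classifier w·x = 0, the adversarial robustness of a point is its Euclidean distance to the boundary, while a random unit-norm perturbation must typically be scaled roughly sqrt(d) times larger before it crosses the boundary.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000
w = rng.standard_normal(d)             # normal vector of a linear boundary w.x = 0
x = rng.standard_normal(d)             # a data point

# Adversarial robustness: distance from x to the hyperplane.
dist = abs(w @ x) / np.linalg.norm(w)

# Random robustness: for a random unit direction v, the magnitude t needed
# to cross the boundary satisfies w.(x + t*v) = 0, i.e. |t| = |w.x| / |w.v|.
ts = []
for _ in range(2000):
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)             # uniform random unit direction
    ts.append(abs(w @ x) / abs(w @ v))

ratio = np.median(ts) / dist
print(f"sqrt(d) = {np.sqrt(d):.1f}, median random/adversarial ratio = {ratio:.1f}")
```

The median ratio comes out on the order of sqrt(d) (up to a small constant), matching the scaling the paper proves more generally under a bound on the boundary's curvature.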