Reviews: Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers

Neural Information Processing Systems 

I thank the authors for a very well-written paper with clear motivation, results, and substantiation. Originality: This paper extends the work of Cohen et al. by considering smoothing distributions that can provide defenses against l0 attacks. This seems like a natural next step given the work of Cohen et al. The focus on structured classifiers (decision trees in this case) is novel: decision trees have a natural structure that lends itself to tighter certificates for l0 attacks. The paper also provides some tricks to handle numerical issues when applying the approach to larger datasets such as ImageNet.
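The randomized-smoothing setup the review refers to can be sketched as follows. This is an illustrative Monte Carlo estimate only, not the paper's actual certification procedure: the function name, parameters, and the discrete smoothing distribution (each coordinate resampled independently with probability p, which suits l0-style perturbations) are my own assumptions for exposition.

```python
import random

def smooth_predict(base_classifier, x, p=0.2, num_samples=1000, vocab=(0, 1)):
    """Monte Carlo estimate of a smoothed classifier's prediction.

    Each coordinate of x is independently resampled from `vocab` with
    probability p (an l0-friendly discrete smoothing distribution, assumed
    here for illustration). Returns the majority class over the samples and
    its empirical probability; a certificate would then lower-bound this
    probability and translate it into a certified l0 radius.
    """
    counts = {}
    for _ in range(num_samples):
        z = [random.choice(vocab) if random.random() < p else xi for xi in x]
        y = base_classifier(z)
        counts[y] = counts.get(y, 0) + 1
    top = max(counts, key=counts.get)
    return top, counts[top] / num_samples
```

For example, smoothing a simple majority-vote base classifier over binary inputs yields a prediction that is stable under small numbers of coordinate flips, which is the intuition behind the l0 certificates discussed above.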