Review for NeurIPS paper: Overfitting Can Be Harmless for Basis Pursuit, But Only to a Degree

Neural Information Processing Systems 

Weaknesses: First of all, I should say that I like this paper. The following should be read more as 'issues in need of clarification' or 'things a reader might be confused about' rather than 'weaknesses'. As far as I can tell, Thm 2 and Prop 4 don't resolve this, and I find the experimental evidence difficult to interpret (more on that below). In particular, I'd be curious whether it is possible to get rid of the constant term on the RHS of (9).

Regarding the experiments: first, I'd expect the risk curves (of both the l1 and l2 minimisers) to be decreasing in p; isn't that what this is all about? Second, it is claimed in Section 3(i) that the risk of l1-minimisers is unaffected by the norm of beta, but there is a clear difference between the green and the orange curves (BP, beta norm 1 vs. 0.1).
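To make my confusion about the risk curves concrete, here is the kind of simulation I have in mind. This is my own minimal sketch, not the authors' setup: the Gaussian design, the sparse ground-truth beta, the noise level, and the use of scipy's LP solver for basis pursuit are all my assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def min_l2_interpolator(X, y):
    # Minimum-l2-norm interpolator for underdetermined X beta = y (n < p):
    # beta = X^T (X X^T)^{-1} y
    return X.T @ np.linalg.solve(X @ X.T, y)

def min_l1_interpolator(X, y):
    # Basis pursuit: min ||beta||_1 s.t. X beta = y,
    # cast as an LP via the split beta = u - v with u, v >= 0.
    n, p = X.shape
    c = np.ones(2 * p)
    A_eq = np.hstack([X, -X])
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * p), method="highs")
    uv = res.x
    return uv[:p] - uv[p:]

rng = np.random.default_rng(0)
n, beta_norm = 20, 1.0  # assumed sample size and signal norm
for p in (40, 80):      # increasing overparameterisation
    beta = np.zeros(p)
    beta[:2] = beta_norm / np.sqrt(2)  # 2-sparse truth, ||beta||_2 = beta_norm
    X = rng.standard_normal((n, p))
    y = X @ beta + 0.1 * rng.standard_normal(n)
    b1 = min_l1_interpolator(X, y)
    b2 = min_l2_interpolator(X, y)
    # Parameter-error proxy for the risk of each interpolator
    print(p, np.sum((b1 - beta) ** 2), np.sum((b2 - beta) ** 2))
```

Sweeping p over a finer grid (and averaging over draws of X and the noise) would reproduce the kind of risk-vs-p curves shown in the paper, and varying beta_norm would let one check the claimed insensitivity of the l1-minimiser's risk to the norm of beta.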