We are glad that R1 thinks that our framework is natural.

Neural Information Processing Systems 

"The framework is natural but straight-forward"
This shows that our general framework is effective and does not lead to vacuous bounds.

"Perhaps to have more focus on cases when this will not work"
Accounting for this is out of the scope of this work.

"Covering numbers are not very effective in practice"
In particular, the mentioned paper "Uniform convergence may be unable to explain generalization in deep learning" [...]. These results suggest considering a shifted view: "Uniform Convergence strikes back and can explain [...]". This is an interesting open question and we leave it as future work.

"Experiments on real world data would have helped"
We will move some experiments on real data to the main body in a future version of our paper. We will make this explicitly clear in a future version of our paper.