Review for NeurIPS paper: Learning Bounds for Risk-sensitive Learning
Neural Information Processing Systems
Weaknesses: I have a handful of minor concerns. Exploring inverted OCEs would also have been interesting... (2) ... because while the OCE formulation for CVaR (and probably for entropic risk) is convex, at least in the loss, the inverted OCEs appear to lead to a non-convex problem. Machine learning has learned to live with non-convexity in the models, but some basic experiments could help assuage this concern.

When using complicated neural networks, my understanding is that these bounds are mostly vacuous because the Rademacher complexities are large; hence the debates over "rethinking generalization" and the shortcomings of uniform convergence. I do not take these issues to mean that we should not study these kinds of theory problems, but I find the suggestion that the empirical terms will simply vanish and thereby solve all our problems to be disingenuous.
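To make the convexity point concrete: the OCE form of CVaR is the Rockafellar–Uryasev objective, CVaR_α(Z) = min_λ { λ + (1/α) E[(Z − λ)₊] }, which is convex in λ and in the loss Z. A minimal empirical sketch (not the paper's code; the function name and interface are illustrative assumptions):

```python
import numpy as np

def oce_cvar(losses, alpha):
    """Empirical CVaR at level alpha via its OCE / Rockafellar-Uryasev form:
        CVaR_alpha(Z) = min_lambda  lambda + (1/alpha) * E[(Z - lambda)_+]
    The objective is convex in lambda (and in the losses), which is the
    convexity of the OCE formulation referred to above.
    """
    losses = np.asarray(losses, dtype=float)
    # A minimizer lambda* is the (1 - alpha)-quantile of the losses (the
    # value-at-risk), so the minimum can be evaluated there directly.
    lam = np.quantile(losses, 1.0 - alpha)
    return lam + np.mean(np.maximum(losses - lam, 0.0)) / alpha
```

For example, `oce_cvar([1, 2, 3, 4], 0.5)` returns 3.5, the mean of the worst half of the losses, and `alpha = 1.0` recovers the plain mean. An "inverted" OCE would move the nonlinearity in a way that breaks this convex structure, which is the source of the concern above.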
Feb-11-2025, 23:24:22 GMT