Review for NeurIPS paper: Learning Bounds for Risk-sensitive Learning

This is a learning-theory paper set in the situation where the usual mean-loss objective is replaced by a risk-sensitive objective that assigns different weights to data points depending on their losses. This setting is important in robust learning, where only the fraction of the sample with the smallest losses is considered. The paper analyzes this setting via Rademacher complexity bounds, suggests a connection to Sample-Variance-Penalization (SVP), and concludes with experimental results. The appendix also contains a robustness analysis.
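To make the setting concrete, the following is a minimal illustrative sketch (not the paper's actual objective) of the kind of reweighting the review describes: a standard mean loss next to a "trimmed" risk-sensitive loss that keeps only a fraction of the samples with the smallest losses. The function names and the `keep_frac` parameter are hypothetical, chosen for illustration.

```python
import numpy as np

def mean_loss(losses):
    """Standard empirical risk: uniform weight on every sample."""
    return float(np.mean(losses))

def trimmed_loss(losses, keep_frac=0.8):
    """Risk-sensitive objective of the kind described above:
    only the keep_frac fraction of samples with the smallest
    losses gets nonzero (uniform) weight; the rest are ignored."""
    losses = np.sort(np.asarray(losses, dtype=float))
    k = max(1, int(np.floor(keep_frac * len(losses))))
    return float(np.mean(losses[:k]))

losses = [0.1, 0.2, 0.3, 0.4, 5.0]  # one sample with an outlying loss
print(mean_loss(losses))            # 1.2 — dominated by the outlier
print(trimmed_loss(losses, 0.8))    # 0.25 — averages the 4 smallest losses
```

The trimmed objective is insensitive to the single large-loss sample, which is exactly the robustness motivation the review points to.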