A Universal Growth Rate for Learning with Smooth Surrogate Losses
This paper presents a comprehensive analysis of the growth rate of $H$-consistency bounds (and excess error bounds) for various surrogate losses used in classification. We prove a square-root growth rate near zero for smooth margin-based surrogate losses in binary classification, providing both upper and lower bounds under mild assumptions.
Neural Information Processing Systems
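For reference, H-consistency bounds relate the target (zero-one) estimation error to the surrogate estimation error through a growth function; a minimal sketch of the square-root growth rate described in the abstract, using generic notation assumed here rather than taken from the paper, is:

```latex
% Sketch of an H-consistency bound with square-root growth near zero.
% The symbols (estimation errors E, growth function Gamma, surrogate Phi)
% are assumed for illustration and are not copied from the paper.
\[
  \mathcal{E}_{\ell_{0\text{-}1}}(h) - \mathcal{E}^{*}_{\ell_{0\text{-}1}}(\mathcal{H})
  \;\le\;
  \Gamma\!\bigl(\mathcal{E}_{\Phi}(h) - \mathcal{E}^{*}_{\Phi}(\mathcal{H})\bigr),
  \qquad
  \Gamma(t) = \Theta\!\bigl(\sqrt{t}\,\bigr) \ \text{as } t \to 0^{+},
\]
```

Here $\Phi$ stands for a smooth margin-based surrogate loss, and the $\Theta(\sqrt{t})$ behavior near zero reflects the matching upper and lower bounds the abstract claims under mild assumptions.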