
Collaborating Authors

 Zhu, Xiaohan


Quantifying Overfitting along the Regularization Path for Two-Part-Code MDL in Supervised Classification

arXiv.org Machine Learning

We provide a complete characterization of the entire regularization curve of a modified two-part-code Minimum Description Length (MDL) learning rule for binary classification, based on an arbitrary prior or description language. Grunwald and Langford [2004] previously established the lack of asymptotic consistency, from an agnostic PAC (frequentist worst case) perspective, of the MDL rule with a penalty parameter of $\lambda=1$, suggesting that it under-regularizes. Driven by interest in understanding how benign or catastrophic under-regularization and overfitting might be, we obtain a precise quantitative description of the worst-case limiting error as a function of the regularization parameter $\lambda$ and noise level (or approximation error), significantly tightening the analysis of Grunwald and Langford for $\lambda=1$ and extending it to all other choices of $\lambda$.
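
For orientation, one natural formalization of such a rule (an illustrative sketch, not necessarily the exact rule analyzed in the paper: the prior $\pi$, the coding of the labels by the locations of the errors, and the side on which $\lambda$ enters are assumptions here) selects, on a sample of size $n$,
$$\hat{h}_\lambda \;\in\; \arg\min_{h}\Big[\,\lambda\bigl(-\log \pi(h)\bigr) \;+\; \log\binom{n}{n\,\widehat{\mathrm{err}}_n(h)}\Big],$$
where $-\log\pi(h)$ is the codelength of the hypothesis, the second term is (up to lower-order terms) the codelength needed to specify which of the $n$ labels $h$ gets wrong, and $\widehat{\mathrm{err}}_n(h)$ is the empirical 0/1 error of $h$. In this formalization, $\lambda=1$ corresponds to the unmodified two-part code, while larger $\lambda$ penalizes complex hypotheses more heavily.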


Tight Bounds on the Binomial CDF, and the Minimum of i.i.d Binomials, in terms of KL-Divergence

arXiv.org Machine Learning

We provide finite sample upper and lower bounds on the Binomial tail probability which are a direct application of Sanov's theorem. We then use these to obtain high probability upper and lower bounds on the minimum of i.i.d. Binomials. Both bounds are finite sample, asymptotically tight, and expressed in terms of the KL-divergence. The purpose of this note is to provide, in a self-contained and concise way, both upper and lower bounds on the Binomial tail and, through that, on the minimum of i.i.d. Binomials.
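
For reference, the standard KL-divergence form of such Binomial tail bounds (stated here via the Chernoff argument and the method of types; the constants in the note itself may be sharper) is, for $X\sim\mathrm{Bin}(n,p)$, an integer $0\le k\le np$, and $\mathrm{KL}(q\,\|\,p)=q\log\frac{q}{p}+(1-q)\log\frac{1-q}{1-p}$ with natural logarithms,
$$\frac{1}{n+1}\,e^{-n\,\mathrm{KL}(k/n\,\|\,p)} \;\le\; \Pr[X\le k] \;\le\; e^{-n\,\mathrm{KL}(k/n\,\|\,p)}.$$
The upper bound is the two-outcome case of Sanov's theorem (equivalently, the Chernoff bound), and the lower bound follows from the method-of-types estimate $\binom{n}{k}(k/n)^k(1-k/n)^{n-k}\ge\frac{1}{n+1}$ applied to the single point $\Pr[X=k]$.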