Supplementary Material

A  Proof of Theorem 3.1 (Realizable Case - Positive Result)


Theorem 3.1 (Realizable Case - Positive Result). Let $\mathcal{H}$ be a hypothesis class with VC dimension $d$ and let $\eta \in (0, 1)$. Then there exists a learner $\mathrm{Lrn}$ whose $\eta$-adversarial risk satisfies $\mathrm{adv}(\mathrm{Lrn} \mid \mathcal{D}, \,\cdot\,) \le c \cdot (\cdots)$.

To prove Theorem 3.1, we will use the SPV meta-learner. First, we prove a more general result on the performance of our SPV meta-learner. We denote by $\mathrm{SPV}(\mathrm{Lrn})$ the algorithm obtained by executing SPV with the learner $\mathrm{Lrn}$ as its input algorithm. Let $\mathcal{H}$ be a concept class, let $\mathcal{D}$ be a distribution over examples, and let $\mathrm{Lrn}$ be a learning rule. Let also $\eta \in (0, 1)$ be the stability parameter given to SPV, and let $n \ge 1/\eta$ be the sample size. By applying linearity of expectation we get
\[
\mathbb{E}\left[\sum_{i} X_i\right] \;=\; \sum_{i} \mathbb{E}\left[X_i\right].
\]
To prove Theorem 3.1, we will need an optimal learner as the input learner for SPV.
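The section uses SPV only through its interface (an input learner $\mathrm{Lrn}$, a stability parameter $\eta$, and a sample of size $n \ge 1/\eta$) and does not restate its internals here. The following is a minimal Python sketch of a subsample-and-vote style meta-learner with that interface; the partition into disjoint chunks, the majority-vote aggregation, and all names (`spv`, `threshold_learner`) are illustrative assumptions, not the paper's actual SPV procedure.

```python
# Hypothetical sketch of a meta-learner with the SPV(Lrn) interface:
# it takes a base learner Lrn, a stability parameter eta, and a sample
# of size n >= 1/eta. The aggregation below (majority vote over
# predictors trained on disjoint chunks) is an assumption for
# illustration only.
import random
from typing import Callable, List, Sequence, Tuple

Example = Tuple[float, int]          # (instance x, label y)
Predictor = Callable[[float], int]   # hypothesis mapping x -> label
Learner = Callable[[Sequence[Example]], Predictor]

def spv(lrn: Learner, sample: Sequence[Example], eta: float) -> Predictor:
    """Run the input learner on disjoint subsamples and aggregate by vote."""
    n = len(sample)
    assert 0 < eta < 1 and n >= 1 / eta, "requires n >= 1/eta"
    k = max(1, int(eta * n))          # number of disjoint chunks
    chunk = n // k                    # chunk size (leftover examples dropped)
    predictors: List[Predictor] = [
        lrn(sample[i * chunk:(i + 1) * chunk]) for i in range(k)
    ]

    def vote(x: float) -> int:
        ballots = [h(x) for h in predictors]
        return max(set(ballots), key=ballots.count)   # majority label

    return vote

# Toy input learner Lrn: a one-dimensional threshold learner.
def threshold_learner(data: Sequence[Example]) -> Predictor:
    pos = [x for x, y in data if y == 1]
    t = min(pos) if pos else float("inf")
    return lambda x: 1 if x >= t else 0

if __name__ == "__main__":
    random.seed(0)
    train = [(x, int(x >= 0.5)) for x in (random.random() for _ in range(100))]
    h = spv(threshold_learner, train, eta=0.1)
    print(h(0.7), h(0.2))   # majority-vote predictions, e.g. 1 0
```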