Supplementary Material

A Proof of Theorem 3.1 (Realizable Case - Positive Result)

Theorem (Restatement of Theorem 3.1).
Neural Information Processing Systems
Let H be a hypothesis class with VC dimension d, and let ε ∈ (0, 1). Then there exists a learner Lrn having ε-adversarial risk …

To prove Theorem 3.1, we will use the SPV learner, and we let n ≥ 1/ε be the sample size. By applying linearity of expectation, we get E[…]. To prove Theorem 3.1, we will also need an optimal learner as the input learner for SPV. Theorem 3.1 can now be inferred immediately as a direct application of Lemma A.1 and Theorem A.2. The impossibility result in Theorem 3.3 extends to randomized learning rules.
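The expectation in the linearity-of-expectation step is truncated in the source. As a hedged illustration only, if the quantity being bounded is an average of per-example error indicators over the n sample points (the symbols x_t, y_t, and the per-round hypothesis ĥ_t below are illustrative assumptions, not taken from the source), the step has the generic form:

\begin{align*}
\mathbb{E}\left[\frac{1}{n}\sum_{t=1}^{n} \mathbf{1}\{\hat{h}_t(x_t) \neq y_t\}\right]
= \frac{1}{n}\sum_{t=1}^{n} \Pr\left[\hat{h}_t(x_t) \neq y_t\right],
\end{align*}

i.e., the expectation of the average error is the average of the per-example error probabilities, each of which can then be bounded individually. The actual quantity and bound in the paper may differ.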