Sharper Generalization Bounds for Pairwise Learning: Supplementary Material

Neural Information Processing Systems

A Proof of Theorem 1
To prove Theorem 1, we need to introduce some lemmas. With these lemmas, we can give the proof of Theorem 1 on high-probability bounds of the generalization gap. The concentration inequality established in Lemma A.1 applies to a summation of random functions. According to Lemma A.3, we know that the centered random functions f(A(S); Z, Z') − E[f(A(S); Z, Z')] satisfy the conditions required there. Therefore, all the assumptions of Lemma A.1 hold for these random functions, and we can apply Lemma A.1 to derive the stated high-probability bound on the generalization gap.

B Proof of Lemma 2

We first prove Lemma 2 on the norm of the output model. We can plug the above inequality back into (B.1) to derive a bound on σ² E‖A(S) − w*‖².

C Proof of Theorem 3

To prove Theorem 3, we introduce some lemmas. Assume that for all z, z', condition (4.3) holds.
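The concentration phenomenon underlying these bounds can be illustrated numerically. The following is a minimal sketch under toy assumptions (a pairwise loss ℓ(z, z') = |z − z'| and uniform data, which are illustrative choices, not the paper's setting): the pairwise empirical risk is a U-statistic that concentrates around the population risk, and the generalization gap is the quantity the high-probability bounds control.

```python
import numpy as np

# Illustrative sketch only: concentration of a pairwise empirical risk
# (a U-statistic) around its population counterpart. The loss |z - z'|
# and the Uniform[0, 1] data distribution are assumptions made for this
# toy example, not the setting of the paper.

rng = np.random.default_rng(0)
n = 2000
z = rng.uniform(0.0, 1.0, size=n)

# Pairwise empirical risk: average of |z_i - z_j| over all pairs i != j.
diff = np.abs(z[:, None] - z[None, :])
emp_risk = (diff.sum() - np.trace(diff)) / (n * (n - 1))  # diagonal is 0

# Population risk E|Z - Z'| for Z, Z' i.i.d. Uniform[0, 1] equals 1/3.
pop_risk = 1.0 / 3.0
gap = abs(emp_risk - pop_risk)
print(f"empirical risk = {emp_risk:.4f}, generalization gap = {gap:.4f}")
```

For n = 2000 samples the gap is small with high probability, which is exactly the behavior that a high-probability generalization bound quantifies as a function of n.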