Neural Information Processing Systems

Quantile (and, more generally, KL) regret bounds, such as those achieved by NormalHedge (Chaudhuri, Freund, and Hsu 2009) and its variants, relax the goal of competing against the best individual expert to only competing against a majority of experts on adversarial data.
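The distinction between the classical best-expert benchmark and the quantile benchmark can be illustrated with a small sketch (the function names and the epsilon-quantile convention are illustrative, not taken from the paper): rather than measuring the gap to the single best expert, quantile regret measures the gap to the expert ranked at the eps-quantile of cumulative losses, so the learner only has to beat a (1 - eps) majority of the experts.

```python
import math

def best_expert_regret(learner_loss, expert_losses):
    # Classical regret: gap between the learner's cumulative loss
    # and that of the single best expert.
    return learner_loss - min(expert_losses)

def quantile_regret(learner_loss, expert_losses, eps):
    # Gap to the expert at the eps-quantile of the loss distribution:
    # an eps fraction of the experts achieve loss at most this value,
    # so the benchmark is robust to a few lucky outlier experts.
    sorted_losses = sorted(expert_losses)
    k = max(math.ceil(eps * len(expert_losses)) - 1, 0)
    return learner_loss - sorted_losses[k]

# With one lucky expert among 100, the best-expert benchmark is harsh
# while the 0.1-quantile benchmark only asks to beat the 90% majority.
losses = [1.0] + [10.0] * 99
print(best_expert_regret(8.0, losses))        # 7.0
print(quantile_regret(8.0, losses, eps=0.1))  # -2.0
```

As eps shrinks toward 1/N the quantile benchmark recovers the best-expert benchmark, which is why quantile bounds are described as a relaxation of competing against the best individual expert.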







A Appendix Organization


This appendix is organized as follows: in Sections B, C, and D we provide the missing proofs of Theorems 3, 4, and 5, respectively. In Section E we provide detailed versions of Theorems 6 and 7 containing all constants. In Section F we provide a version of Theorem 2 with all constants, for completeness.

B Proof of Theorem 3

In this section we provide the missing proof of Theorem 3, restated below.

Lemma 3. Let W be a real vector space and …

So now it remains only to show the regret bound. We recall the following consequence of the concavity of the square-root function (see Auer et al. [2002] and Duchi et al. [2010] for proofs): for any sequence of non-negative numbers x…

C Proof of Theorem 4

In this section we provide the missing proof of Theorem 4, restated below.

Lemma 4. …

Now it remains to use the regret bound on A. Observe that |s…

D Proof of Theorem 5

In this section we provide the missing proof of Theorem 5, restated below.

Theorem 5. …
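The square-root concavity consequence cited from Auer et al. [2002] is truncated in this extraction; in its standard form (a plausible reconstruction, not a quote from this paper's statement) it reads:

```latex
% Standard self-bounding inequality from concavity of the square root
% (Auer et al. 2002); stated as a reconstruction of the truncated
% inequality above, with the convention 0/\sqrt{0} = 0.
\[
  \sum_{t=1}^{T} \frac{x_t}{\sqrt{\sum_{s=1}^{t} x_s}}
  \;\le\; 2\sqrt{\sum_{t=1}^{T} x_t}
  \qquad \text{for any } x_1, \dots, x_T \ge 0 .
\]
```

This inequality is the usual tool for bounding sums of per-round terms normalized by running cumulative quantities, which is consistent with its use here in establishing the regret bound.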