Constant Regret, Generalized Mixability, and Mirror Descent

Zakaria Mhammedi, Robert C. Williamson

Neural Information Processing Systems

Under this setting, and for the right choice of loss function and "mixing" algorithm, it is possible for the learner to achieve a constant regret regardless of the number of prediction rounds.
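As a concrete illustration of this horizon-independent guarantee (a minimal sketch, not the paper's generalized algorithm): for the log loss, which is 1-mixable, the classic Aggregating Algorithm predicts with a Bayes mixture over N experts and its cumulative loss exceeds the best expert's by at most ln N, regardless of the number of rounds T. The expert predictions and data below are synthetic, for demonstration only.

```python
import math
import random

def aggregating_algorithm(expert_preds, outcomes):
    """Aggregating Algorithm for binary log loss (learning rate eta = 1).

    expert_preds: T x N list of each expert's probability of outcome 1.
    outcomes: list of T binary outcomes.
    Returns (learner's cumulative log loss, best expert's cumulative log loss).
    """
    n = len(expert_preds[0])
    weights = [1.0 / n] * n          # uniform prior over experts
    learner_loss = 0.0
    expert_losses = [0.0] * n
    for preds, y in zip(expert_preds, outcomes):
        # Mixture (Bayes-predictive) forecast.
        p = sum(w * q for w, q in zip(weights, preds))
        learner_loss += -math.log(p if y == 1 else 1.0 - p)
        for i, q in enumerate(preds):
            li = -math.log(q if y == 1 else 1.0 - q)
            expert_losses[i] += li
            weights[i] *= math.exp(-li)  # exponential-weights update
        total = sum(weights)
        weights = [w / total for w in weights]
    return learner_loss, min(expert_losses)

random.seed(0)
T, N = 200, 8
preds = [[random.uniform(0.1, 0.9) for _ in range(N)] for _ in range(T)]
outcomes = [random.randint(0, 1) for _ in range(T)]
learner, best = aggregating_algorithm(preds, outcomes)
# Regret is bounded by ln(N), a constant independent of T.
print(learner - best <= math.log(N))
```

Note that the ln N bound here is exact for log loss: the mixture's cumulative loss equals the negative log of a prior-weighted average of the experts' likelihoods, which can never fall more than ln N below the best single expert's loss.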


To Reviewer

Neural Information Processing Systems

It seems some key points and details were misunderstood; we hope our explanation below helps clarify the confusion. By "specific learning rate schedule", we think... We think the empirical evidence is sufficient to verify our theoretical claims, and this is exactly the case here. Figure 1(b) in [Triantafillou et al. 2020] shows that the increase of shots... For your other comments: 1) The inner-task gap vanishes because the expectation of the loss function w.r.t...


