Locally-Adaptive Nonparametric Online Learning: Supplementary Material

Neural Information Processing Systems 

In the case of generic convex losses, we use the more complex parameter-free algorithm AdaNormalHedge. The following theorem states a slightly more general bound that holds for any $\eta$-exp-concave loss function (for completeness, the proof is given in Appendix D). Now note that although the algorithm is actually initialized with $w_{1,i} = 1$, Lemma 1 shows that the regret remains the same if we assume the algorithm is initialized with $w_{E_1}$. Suppose that Algorithm 5 is run using predictions and updates provided by AdaNormalHedge. Since in our locally-adaptive setting the node experts are local learners, $\hat{y}_{i,t}$ should be viewed as the prediction of the local online learning algorithm sitting at node $i$ of the tree.
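To fix ideas, the following is a minimal sketch of the kind of aggregation described above: AdaNormalHedge (Luo and Schapire, 2015) run over a set of node experts, each of which is itself a local online learner. The expert interface (`predict`/`update`), the `loss_fn` argument, and the uniform prior are illustrative assumptions rather than the paper's notation, and any tree-growing or expert-activation logic from Algorithm 5 is omitted.

```python
import math


def anh_weight(r, c):
    """Potential-based weight w(R, C) of AdaNormalHedge (Luo & Schapire, 2015)."""
    def phi(r, c):
        return math.exp(max(r, 0.0) ** 2 / (3.0 * c)) if c > 0 else 1.0
    return 0.5 * (phi(r + 1.0, c + 1.0) - phi(r - 1.0, c + 1.0))


class AdaNormalHedgeSketch:
    """Sketch of AdaNormalHedge over a fixed set of experts.

    Each expert is assumed to expose predict(x) and update(x, y); in the
    locally-adaptive setting they would be the local learners sitting at
    the nodes of the tree (names here are illustrative).
    """

    def __init__(self, experts, priors=None):
        n = len(experts)
        self.experts = experts
        self.priors = priors or [1.0 / n] * n
        self.R = [0.0] * n  # cumulative regret to each expert
        self.C = [0.0] * n  # cumulative absolute instantaneous regret

    def predict(self, x):
        # hat{y}_{i,t}: prediction of the local learner at node i
        preds = [e.predict(x) for e in self.experts]
        w = [p * anh_weight(r, c)
             for p, r, c in zip(self.priors, self.R, self.C)]
        total = sum(w)
        if total <= 0.0:  # degenerate case: fall back to the prior
            w, total = list(self.priors), sum(self.priors)
        yhat = sum(wi * yi for wi, yi in zip(w, preds)) / total
        return yhat, preds

    def update(self, loss_fn, yhat, preds, x, y):
        # For a convex loss and a weighted-average prediction, the loss
        # difference is a valid instantaneous-regret increment.
        for i, expert in enumerate(self.experts):
            r = loss_fn(yhat, y) - loss_fn(preds[i], y)
            self.R[i] += r
            self.C[i] += abs(r)
            expert.update(x, y)  # the local learner updates on its own


# Usage per round t: yhat, preds = alg.predict(x_t)
#                    alg.update(loss_fn, yhat, preds, x_t, y_t)
```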
