A Note on KL-UCB+ Policy for the Stochastic Bandit

Honda, Junya

arXiv.org Machine Learning 

A classic setting of the stochastic K-armed bandit problem is considered in this note. For this problem it is known that the KL-UCB policy achieves the asymptotically optimal regret bound, and that the KL-UCB+ policy empirically performs better than KL-UCB, although a regret bound for the original form of KL-UCB+ has been unknown. This note demonstrates that a simple proof of the asymptotic optimality of the KL-UCB+ policy can be given by the same techniques as those used in analyses of other known policies. In the stochastic bandit problem, it is known that there exists a (problem-dependent) regret lower bound [1][2]. It can be achieved by, for example, the DMED policy [3] for the model of nonparametric distributions over [0, 1]. The conference version [6] of [5] also proposed the KL-UCB+ policy, which empirically performs better than KL-UCB but does not have a theoretical guarantee.
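To make the contrast between the two policies concrete, the following is a minimal sketch of the KL-UCB and KL-UCB+ upper-confidence indices for Bernoulli arms. The function names and the bisection routine are illustrative choices, not the note's notation; the indices follow the commonly stated forms, where KL-UCB uses a threshold of roughly log t (analyses often add lower-order log log t terms) and KL-UCB+ replaces it with log(t / N_i), yielding a smaller, less exploratory index.

```python
import math

def bern_kl(p, q):
    """Bernoulli KL divergence d(p, q), clipped away from {0, 1} for stability."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_index(mean, pulls, threshold):
    """Largest q in [mean, 1] with pulls * d(mean, q) <= threshold, by bisection."""
    lo, hi = mean, 1.0
    for _ in range(50):
        mid = (lo + hi) / 2.0
        if pulls * bern_kl(mean, mid) <= threshold:
            lo = mid
        else:
            hi = mid
    return lo

def kl_ucb(mean, pulls, t):
    # KL-UCB index: exploration threshold log t
    return kl_index(mean, pulls, math.log(t))

def kl_ucb_plus(mean, pulls, t):
    # KL-UCB+ index: threshold log(t / pulls) -- smaller than log t,
    # which is why the policy explores less and tends to do better empirically
    return kl_index(mean, pulls, math.log(t / pulls))
```

At each round the policy pulls the arm maximizing its index over its empirical mean and pull count; since log(t / N_i) < log t whenever N_i > 1, the KL-UCB+ index never exceeds the KL-UCB index for the same arm statistics.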
