A Note on KL-UCB+ Policy for the Stochastic Bandit
A classic setting of the stochastic K-armed bandit problem is considered in this note. For this problem it has been known that the KL-UCB policy achieves the asymptotically optimal regret bound, and that the KL-UCB+ policy empirically performs better than KL-UCB, although a regret bound for the original form of KL-UCB+ has been unknown. This note demonstrates that a simple proof of the asymptotic optimality of the KL-UCB+ policy can be given by the same techniques as those used in the analyses of other known policies.

In the stochastic bandit problem, it is known that there exists a (problem-dependent) regret lower bound [1][2]. It can be achieved by, for example, the DMED policy [3] for the model of nonparametric distributions over [0, 1]. One of the conference versions [6] of [5] also proposed the KL-UCB+ policy, which empirically performs better than KL-UCB but did not come with a theoretical guarantee.
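To make the difference between the two policies concrete, the following is a minimal sketch of the KL-UCB+ index for the Bernoulli model. In the standard descriptions, KL-UCB uses an exploration term of roughly log t, while KL-UCB+ replaces it with log(t / N_i), where N_i is the pull count of arm i; the index is the largest mean q whose KL divergence from the empirical mean stays under that budget, found here by bisection. Function names and numerical tolerances are illustrative choices, not taken from the note.

```python
import math

def bernoulli_kl(p, q):
    """KL divergence d(p, q) between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12  # clamp away from 0 and 1 to avoid log(0)
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_plus_index(mu_hat, n_pulls, t):
    """KL-UCB+ upper confidence index for one arm:
        max { q >= mu_hat : n_pulls * d(mu_hat, q) <= log(t / n_pulls) }.
    Since d(mu_hat, .) is increasing on [mu_hat, 1], bisection suffices."""
    threshold = math.log(max(t / n_pulls, 1.0)) / n_pulls
    lo, hi = mu_hat, 1.0
    for _ in range(50):  # bisection; ample precision for the index
        mid = (lo + hi) / 2
        if bernoulli_kl(mu_hat, mid) <= threshold:
            lo = mid
        else:
            hi = mid
    return lo
```

In a full policy, each arm would be pulled once for initialization, and thereafter the arm maximizing this index is pulled at each round t; the only change relative to KL-UCB is the log(t / n_pulls) budget in place of a budget growing purely in t.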
Mar-20-2019