Position-based Multiple-play Bandit Problem with Unknown Position Bias

Junpei Komiyama, Junya Honda, Akiko Takeda

Neural Information Processing Systems 

Motivated by online advertising, we study a multiple-play multi-armed bandit problem with position bias, in which rewards are obtained from several slots and later slots yield fewer rewards. We characterize the hardness of the problem by deriving an asymptotic regret bound. We propose the Permutation Minimum Empirical Divergence (PMED) algorithm and derive its asymptotically optimal regret bound. Because the position bias is unknown, the optimal algorithm for this problem requires non-convex optimizations, which differ from those arising in usual partial monitoring and semi-bandit problems. We propose a cutting-plane method and a related bi-convex relaxation for these optimizations by introducing auxiliary variables.
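To make the setting concrete, the sketch below simulates the position-based model (PBM) that the title refers to: placing arm i in slot l yields a Bernoulli reward with mean theta_i * kappa_l, where the position-bias vector kappa is decreasing over slots and unknown to the learner. This is only an illustrative environment with assumed parameter values and a uniform-random baseline, not the authors' PMED algorithm or code.

```python
# Minimal sketch (assumed names and values): a position-based model (PBM)
# bandit environment where the expected reward of arm i in slot l factors
# as theta[i] * kappa[l], with kappa decreasing and hidden from the learner.
import numpy as np

rng = np.random.default_rng(0)

K, L, T = 5, 3, 10_000                         # arms, slots, rounds (illustrative)
theta = np.array([0.9, 0.7, 0.6, 0.4, 0.2])    # arm-dependent reward probabilities
kappa = np.array([1.0, 0.6, 0.3])              # position bias (unknown to the learner)

def pull(allocation):
    """Place arm allocation[l] in slot l; return one Bernoulli reward per slot."""
    p = theta[allocation] * kappa              # PBM: reward prob = theta_i * kappa_l
    return rng.binomial(1, p)

# Oracle allocation: the L best arms, matched in order to the L best slots.
oracle = np.argsort(-theta)[:L]
oracle_mean = float(np.sum(theta[oracle] * kappa))

# Uniform-random baseline, just to show how pseudo-regret accumulates.
regret = 0.0
for _ in range(T):
    allocation = rng.choice(K, size=L, replace=False)   # L distinct arms per round
    _rewards = pull(allocation)
    regret += oracle_mean - float(np.sum(theta[allocation] * kappa))

print(f"pseudo-regret of the uniform baseline after {T} rounds: {regret:.1f}")
```

In the paper's approach, the uniform baseline above would be replaced by exploration guided by a minimum-empirical-divergence criterion; because kappa is unknown, the underlying optimization is non-convex, which is what motivates the cutting-plane method and bi-convex relaxation mentioned in the abstract.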
