Does Feedback Help in Bandits with Arm Erasures?

Merve Karakas, Osama Hanna, Lin F. Yang, Christina Fragouli

arXiv.org Machine Learning 

Abstract -- We study a distributed multi-armed bandit (MAB) problem over arm erasure channels, motivated by the increasing adoption of MAB algorithms over communication-constrained networks. In this setup, the learner communicates the chosen arm to an agent over an erasure channel with erasure probability ε ∈ [0, 1); if an erasure occurs, the agent continues pulling the last successfully received arm; the learner always observes the reward of the arm pulled. In past work, we considered the case where the agent cannot convey feedback to the learner, so the learner does not know whether the arm played is the requested one or the last successfully received one. In this paper, we instead consider the case where the agent can send feedback to the learner on whether the arm request was received, so the learner knows exactly which arm was played. Surprisingly, we prove that erasure feedback does not improve the order of the worst-case regret upper bound over the previously studied no-feedback setting. In particular, we prove a regret lower bound of Ω(√(KT) + K/(1 − ε)), where K is the number of arms and T the time horizon, which matches the no-feedback upper bounds up to logarithmic factors. We note, however, that the availability of feedback does enable the design of simpler algorithms that may achieve better constant (albeit not better order) regret bounds; we design one such algorithm and numerically evaluate its performance.

The multi-armed bandit (MAB) framework has emerged as a fundamental model for sequential decision-making under uncertainty, finding applications in areas such as recommendation systems, clinical trials, distributed robotics, and online advertising [1].
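The channel model described in the abstract can be sketched as a short simulation. This is a minimal illustration only, not the paper's algorithm: the function name, the choice of arm 0 as the initially-held arm, and the RNG seed are all hypothetical assumptions for the sketch.

```python
import random

def simulate_erasure_channel(requests, epsilon, rng):
    """Simulate the arm-erasure channel: each requested arm is erased
    with probability epsilon; on an erasure the agent keeps pulling the
    last successfully received arm. Returns the arms actually played,
    which (in the feedback setting) the learner observes exactly.
    Assumes arm 0 is held before any request succeeds (a hypothetical
    initialization for this sketch)."""
    last_received = 0
    played = []
    for arm in requests:
        if rng.random() >= epsilon:  # request successfully received
            last_received = arm
        played.append(last_received)  # agent pulls the held arm
    return played
```

For example, with ε = 0 every request gets through and the played sequence equals the requested one; with ε = 1 every request is erased and the agent never moves off its initial arm, which is why the K/(1 − ε) term in the regret bound blows up as ε → 1.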
