Continuous-Time Multi-Armed Bandits with Controlled Restarts
Semih Cayci, Atilla Eryilmaz, R. Srikant
Time-constrained decision processes are ubiquitous in many fundamental applications in physics, biology, and computer science. Recently, restart strategies have gained significant attention for boosting the efficiency of time-constrained processes by expediting completion times. In this work, we investigate the bandit problem with controlled restarts for time-constrained decision processes, and develop provably good learning algorithms. In particular, we consider a bandit setting where each decision takes a random completion time and yields a random, correlated reward at the end, with both quantities unknown at the time of decision. The goal of the decision-maker is to maximize the expected total reward subject to a time constraint $\tau$. As an additional control, we allow the decision-maker to interrupt an ongoing task and forgo its reward in favor of a potentially more rewarding alternative. For this problem, we develop efficient online learning algorithms with $O(\log(\tau))$ and $O(\sqrt{\tau\log(\tau)})$ regret over a finite and a continuous action space of restart strategies, respectively. We demonstrate the applicability of our algorithms by using them to boost the performance of SAT solvers.
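To make the setting concrete, here is a minimal simulation sketch of the problem the abstract describes: each arm returns a random (completion time, reward) pair, the decision-maker operates under a total time budget $\tau$, and an ongoing task may be interrupted after a cutoff, forfeiting its reward. The UCB-style index on the empirical reward rate and the fixed per-arm cutoffs are illustrative assumptions, not the paper's actual algorithm (names such as `run_budgeted_bandit` are hypothetical):

```python
import math
import random

def run_budgeted_bandit(tau, arms, cutoffs, seed=0):
    """Illustrative sketch (not the paper's algorithm): play arms that
    return (completion_time, reward) under a total time budget tau.
    An attempt that would exceed its cutoff is interrupted: it costs
    the cutoff in time but yields zero reward (reward is forgone)."""
    rng = random.Random(seed)
    K = len(arms)
    n = [0] * K        # attempts per arm
    rew = [0.0] * K    # total reward collected per arm
    spent = [0.0] * K  # total time spent per arm
    t, total, rounds = 0.0, 0.0, 0

    while t < tau:
        rounds += 1
        # UCB-style index on the empirical reward rate (reward per unit time);
        # unplayed arms get priority.
        def index(i):
            if n[i] == 0:
                return float('inf')
            rate = rew[i] / max(spent[i], 1e-9)
            return rate + math.sqrt(2 * math.log(rounds) / n[i])

        i = max(range(K), key=index)
        completion, reward = arms[i](rng)
        remaining = tau - t
        if completion <= min(cutoffs[i], remaining):
            dt, r = completion, reward          # task completes in time
        else:
            dt, r = min(cutoffs[i], remaining), 0.0  # interrupted: reward lost
        t += dt
        n[i] += 1
        rew[i] += r
        spent[i] += dt
        total += r
    return total

# Example: a fast low-reward arm vs. a slower, heavier-tailed high-reward arm,
# where interrupting long attempts can pay off.
fast = lambda rng: (rng.expovariate(1.0), 1.0)   # mean time 1, reward 1.0
slow = lambda rng: (rng.expovariate(0.2), 1.5)   # mean time 5, reward 1.5
total_reward = run_budgeted_bandit(tau=200.0, arms=[fast, slow],
                                   cutoffs=[3.0, 2.0])
```

Under this simple model, the cutoff implements the "controlled restart": time already sunk into a slow attempt is abandoned rather than allowed to consume the remaining budget.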
Jun-30-2020