Achieving O(1/N) Optimality Gap in Restless Bandits through Diffusion Approximation
Chen Yan, Weina Wang, Lei Ying
The Restless Multi-Armed Bandit (RMAB) problem is a fundamental framework in decision theory and operations research, where a decision maker must choose which among multiple tasks (arms) to work on (pull) at each time step in order to maximize cumulative reward [24]. Unlike the classic bandit problem [14], in the restless variant the state of each arm evolves stochastically regardless of whether it is pulled. This problem has gained significant attention due to its applicability in various domains where optimal decision-making under uncertainty is critical, such as machine maintenance [11], target tracking [17], network communication [18], and clinical trials [22], to name a few. Despite its relevance, the RMAB problem is known to be PSPACE-hard [19], and finding optimal policies is computationally challenging, especially when the number of arms N is large. In this paper, we focus on the finite-horizon version of the RMAB problem with N homogeneous arms and horizon H, where each arm follows the same (time-dependent) state transition and reward functions. While computing the exact optimal policy is impractical, the homogeneity of the model allows for the design of efficient heuristic policies. One such class of heuristics is based on fluid approximation, which transforms the original N-armed RMAB problem into a Linear Program (LP).
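To make the fluid-approximation idea concrete, the sketch below solves a toy version of such a fluid LP with scipy. It is a minimal illustration under standard assumptions (a budget of a fixed fraction alpha of arms pulled per step, variables y[t, s, a] giving the expected fraction of arms in state s taking action a at time t); the instance data, variable names, and budget convention are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy instance (all values illustrative, not from the paper):
# S states, 2 actions (0 = rest, 1 = pull), horizon H, budget fraction alpha.
S, H, alpha = 3, 5, 0.4
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(2, S))   # P[a, s, s'] transition kernels
r = rng.random((2, S))                       # r[a, s] per-step rewards
mu0 = np.full(S, 1.0 / S)                    # initial state distribution

# Decision variables y[t, s, a]: expected fraction of arms in state s taking
# action a at time t, flattened into a vector of length H * S * 2.
def idx(t, s, a):
    return (t * S + s) * 2 + a

n = H * S * 2
c = np.zeros(n)
for t in range(H):
    for s in range(S):
        for a in range(2):
            c[idx(t, s, a)] = -r[a, s]       # linprog minimizes, so negate rewards

A_eq, b_eq = [], []

# Initial occupancy: sum_a y[0, s, a] = mu0(s).
for s in range(S):
    row = np.zeros(n)
    row[idx(0, s, 0)] = row[idx(0, s, 1)] = 1.0
    A_eq.append(row); b_eq.append(mu0[s])

# Flow conservation: sum_a y[t+1, s', a] = sum_{s, a} y[t, s, a] * P[a, s, s'].
for t in range(H - 1):
    for sp in range(S):
        row = np.zeros(n)
        row[idx(t + 1, sp, 0)] = row[idx(t + 1, sp, 1)] = 1.0
        for s in range(S):
            for a in range(2):
                row[idx(t, s, a)] -= P[a, s, sp]
        A_eq.append(row); b_eq.append(0.0)

# Budget: a fraction alpha of arms is pulled at each time step.
for t in range(H):
    row = np.zeros(n)
    for s in range(S):
        row[idx(t, s, 1)] = 1.0
    A_eq.append(row); b_eq.append(alpha)

res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq, bounds=[(0, None)] * n)
print("fluid LP value (per-arm expected reward upper bound):", -res.fun)
```

The optimal value of such an LP is the standard fluid upper bound on the per-arm reward of the N-armed problem; heuristic policies are then built by tracking or rounding the LP solution, which is where the optimality-gap analysis of the paper enters.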
arXiv.org Artificial Intelligence
Oct-19-2024