Efficient and Adaptive Posterior Sampling Algorithms for Bandits
Bingshan Hu, Zhiming Huang, Tianyue H. Zhang, Mathias Lécuyer, Nidhi Hegde
We study Thompson Sampling-based algorithms for stochastic bandits with bounded rewards. As the existing problem-dependent regret bound for Thompson Sampling with Gaussian priors [Agrawal and Goyal, 2017] is vacuous when $T \le 288 e^{64}$, we derive a more practical bound that tightens the coefficient of the leading term from $288 e^{64}$ to $1270$. Additionally, motivated by large-scale real-world applications that require scalability, adaptive computational resource allocation, and a balance between utility and computation, we propose two parameterized Thompson Sampling-based algorithms: Thompson Sampling with Model Aggregation (TS-MA-$\alpha$) and Thompson Sampling with Timestamp Duelling (TS-TD-$\alpha$), where $\alpha \in [0,1]$ controls the trade-off between utility and computation. Both algorithms achieve an $O\left(K\ln^{\alpha+1}(T)/\Delta\right)$ regret bound, where $K$ is the number of arms, $T$ is the finite learning horizon, and $\Delta$ denotes the single-round performance loss incurred when pulling a sub-optimal arm.
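For context, below is a minimal sketch of the baseline the abstract refers to, Thompson Sampling with Gaussian priors for rewards bounded in $[0,1]$ in the style analyzed by Agrawal and Goyal [2017]; the paper's TS-MA-$\alpha$ and TS-TD-$\alpha$ variants are not reproduced here, and names such as `pull_arm` are illustrative placeholders rather than anything from the paper.

```python
import numpy as np

def thompson_sampling_gaussian(pull_arm, K, T, rng=None):
    """Gaussian-prior Thompson Sampling over K arms for T rounds.

    pull_arm(k) must return a reward in [0, 1] for arm k.
    """
    rng = rng or np.random.default_rng()
    counts = np.zeros(K)   # n_k: number of pulls of arm k
    means = np.zeros(K)    # empirical mean reward of arm k
    total_reward = 0.0
    for _ in range(T):
        # Posterior sample for each arm: N(mean_k, 1 / (n_k + 1)).
        theta = rng.normal(means, 1.0 / np.sqrt(counts + 1.0))
        k = int(np.argmax(theta))
        r = pull_arm(k)
        # Incremental update of the empirical mean.
        counts[k] += 1
        means[k] += (r - means[k]) / counts[k]
        total_reward += r
    return total_reward

if __name__ == "__main__":
    # Example run on hypothetical Bernoulli arms with unknown means.
    true_means = [0.3, 0.5, 0.7]
    rng = np.random.default_rng(0)
    reward = thompson_sampling_gaussian(
        lambda k: float(rng.random() < true_means[k]), K=3, T=10_000, rng=rng)
    print(f"average reward: {reward / 10_000:.3f}")
```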
May-2-2024