Revisiting Weighted Strategy for Non-stationary Parametric Bandits and MDPs
Jing Wang, Peng Zhao, Zhi-Hua Zhou
Abstract--Non-stationary parametric bandits have attracted much attention recently. There are three principled ways to deal with non-stationarity: the sliding-window, weighted, and restart strategies. As many non-stationary environments exhibit gradual drifting patterns, the weighted strategy is commonly adopted in real-world applications. However, previous theoretical studies show that its analysis is more involved and that the resulting algorithms are either computationally less efficient or statistically suboptimal. This paper revisits the weighted strategy for non-stationary parametric bandits. For linear bandits (LB), we discover that this undesirable feature stems from an inadequate regret analysis, which results in an overly complex algorithm design. We propose a refined analysis framework that simplifies the derivation and, importantly, produces a simpler weight-based algorithm that is as efficient as window/restart-based algorithms while retaining the same regret as previous studies. Furthermore, our new framework can be used to improve the regret bounds of other parametric bandits, including Generalized Linear Bandits (GLB) and Self-Concordant Bandits (SCB). Moreover, we extend our framework to non-stationary Markov Decision Processes (MDPs) with function approximation, focusing on Linear Mixture MDPs and Multinomial Logit (MNL) Mixture MDPs. For both classes, we propose algorithms based on the weighted strategy and establish dynamic regret guarantees using our analysis framework.

Index Terms--dynamic regret, non-stationary bandits, discounted factor, online MDPs, function approximation.

Non-stationary parametric bandits model sequential decision-making problems in which the reward distribution of each arm is structured with an unknown time-varying parameter. They have been extensively studied in recent years [1]-[11] due to their significance in many real-world non-stationary online applications such as recommendation systems [12], [13].
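To make the weighted strategy concrete, the sketch below shows a discount-weighted ridge-regression estimator of the kind commonly used in non-stationary linear bandits: each past observation is down-weighted geometrically by a discount factor so the estimate tracks a drifting parameter. This is a minimal illustration of the general technique, not the paper's exact algorithm; the function name and default parameters are our own choices.

```python
import numpy as np

def weighted_ridge_estimate(features, rewards, gamma=0.95, lam=1.0):
    """Discount-weighted ridge regression for a drifting linear model.

    Observation s (out of T rounds) receives weight gamma^(T-1-s), so
    older samples decay geometrically and recent ones dominate.
    Illustrative sketch only, not the algorithm from this paper.
    """
    T, d = features.shape
    A = lam * np.eye(d)          # regularized weighted design matrix
    b = np.zeros(d)              # weighted feature-reward correlations
    for s in range(T):
        w = gamma ** (T - 1 - s)  # geometric decay of older rounds
        x = features[s]
        A += w * np.outer(x, x)
        b += w * rewards[s] * x
    return np.linalg.solve(A, b)  # ridge estimate of the parameter
```

With `gamma = 1` this reduces to ordinary ridge regression; smaller `gamma` shortens the effective memory, trading estimation variance for faster adaptation to parameter drift.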
Jan-6-2026