Online Reinforcement Learning in Stochastic Games
Wei, Chen-Yu, Hong, Yi-Te, Lu, Chi-Jen
–Neural Information Processing Systems
We study online reinforcement learning in average-reward stochastic games (SGs). An SG models a two-player zero-sum game in a Markov environment, where state transitions and one-step payoffs are determined simultaneously by a learner and an adversary. We propose the \textsc{UCSG} algorithm, which achieves sublinear regret compared to the game value when competing with an arbitrary opponent. This improves on previous results in the same setting. The regret bound depends on the \textit{diameter}, an intrinsic quantity related to the mixing properties of SGs. With a slight extension, \textsc{UCSG} finds an $\varepsilon$-maximin stationary policy with a sample complexity of $\tilde{\mathcal{O}}\left(\text{poly}(1/\varepsilon)\right)$, where $\varepsilon$ is the error parameter. To the best of our knowledge, this extended result is the first of its kind in the average-reward setting. In the analysis, we develop perturbation bounds on Markov chains' mean first passage times, as well as techniques for handling non-stationary opponents, which may be of interest in their own right.
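For context, "regret compared to the game value" in the average-reward setting is commonly formalized as follows; this is a standard definition assumed here for illustration, not quoted from the paper:

```latex
% Assumed formalization (for illustration): v^* is the average-reward
% value of the zero-sum stochastic game, and r_t is the learner's
% one-step payoff at time t. A sublinear bound means Regret_T = o(T),
% i.e., the learner's per-step average payoff approaches v^*.
\mathrm{Regret}_T \;=\; T \, v^{*} \;-\; \sum_{t=1}^{T} r_t
```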
Dec-31-2017