Chi-Jen Lu
Online Reinforcement Learning in Stochastic Games
Chen-Yu Wei, Yi-Te Hong, Chi-Jen Lu
We study online reinforcement learning in average-reward stochastic games (SGs). An SG models a two-player zero-sum game in a Markov environment, where state transitions and one-step payoffs are determined simultaneously by a learner and an adversary. We propose the UCSG algorithm, which achieves sublinear regret relative to the game value when competing with an arbitrary opponent; this improves on previous results in the same setting. The regret bound depends on the diameter, an intrinsic quantity related to the mixing properties of SGs. If the opponent plays an optimistic best response to the learner, UCSG finds an ε-maximin stationary policy with a sample complexity of Õ(poly(1/ε)), where ε is the gap to the best policy.
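As a rough sketch of the regret notion being bounded, in notation introduced here (the paper's exact definition may differ): writing v* for the maximin value of the average-reward game and r_t for the learner's one-step payoff at time t, the regret over T steps can be written as

  \mathrm{Reg}_T \;=\; T\, v^{*} \;-\; \sum_{t=1}^{T} r_t .

Sublinear regret then means \mathrm{Reg}_T = o(T), i.e., the learner's long-run average payoff approaches the game value even against an arbitrary opponent.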
Tracking the Best Expert in Non-stationary Stochastic Environments
Chen-Yu Wei, Yi-Te Hong, Chi-Jen Lu
We study the dynamic regret of the multi-armed bandit and experts problems in non-stationary stochastic environments. We introduce a new parameter Λ, which measures the total statistical variance of the loss distributions over the T rounds of the process, and study how this quantity affects the regret. We investigate the interaction between Λ and Γ, which counts the number of times the distributions change, as well as between Λ and V, which measures how far the distributions deviate over time. One striking finding is that even when Γ, V, and Λ are all restricted to constants, the regret lower bound in the bandit setting still grows with T. Another highlight is that in the full-information setting, a constant regret becomes achievable with constant Γ and Λ, as the regret can then be made independent of T, while with constant V and Λ the regret still has a T^{1/3} dependency.
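To make the parameters concrete, here is one plausible formalization under standard definitions; the notation (and the exact choice of norms) is ours and may differ from the paper's. Let ℓ_t be the loss vector drawn at round t with mean vector μ_t, and let i_t be the arm (or expert) chosen by the learner. The dynamic regret and the three parameters can then be written as

  \mathrm{DReg}_T \;=\; \mathbb{E}\Big[\sum_{t=1}^{T} \ell_t(i_t)\Big] \;-\; \sum_{t=1}^{T} \min_i \mu_t(i),

  \Gamma \;=\; 1 + \sum_{t=2}^{T} \mathbf{1}\{\mu_t \neq \mu_{t-1}\}, \qquad
  V \;=\; \sum_{t=2}^{T} \|\mu_t - \mu_{t-1}\|_\infty, \qquad
  \Lambda \;=\; \sum_{t=1}^{T} \mathbb{E}\big[\|\ell_t - \mu_t\|_\infty^2\big].

Under this reading, Γ counts distribution switches, V accumulates the drift of the means over time, and Λ accumulates the per-round statistical variance around the means.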