Improving Sample Complexity Bounds for (Natural) Actor-Critic Algorithms
Neural Information Processing Systems
The actor-critic (AC) algorithm is a popular method for finding an optimal policy in reinforcement learning. In the infinite-horizon scenario, finite-sample convergence rates for the AC and natural actor-critic (NAC) algorithms have recently been established, but only under independent and identically distributed (i.i.d.) sampling with a single-sample update at each iteration. In contrast, this paper characterizes the convergence rate and sample complexity of AC and NAC under Markovian sampling, with mini-batch data at each iteration, and with the actor using a general policy class for approximation. We show that the overall sample complexity for mini-batch AC to attain an \epsilon-accurate stationary point improves the best known sample complexity of AC by an order of \mathcal{O}(\epsilon^{-1}\log(1/\epsilon)), and the overall sample complexity for mini-batch NAC to attain an \epsilon-accurate globally optimal point improves the existing sample complexity of NAC by an order of \mathcal{O}(\epsilon^{-2}/\log(1/\epsilon)). Moreover, the sample complexities of AC and NAC characterized in this work outperform those of policy gradient (PG) and natural policy gradient (NPG) by factors of \mathcal{O}((1-\gamma)^{-3}) and \mathcal{O}((1-\gamma)^{-4}\epsilon^{-2}/\log(1/\epsilon)), respectively.
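To make the setting concrete, below is a minimal sketch of one mini-batch AC iteration under Markovian sampling, not the paper's algorithm: the toy tabular MDP, softmax actor, TD(0) critic, step sizes `alpha`/`beta`, and batch size `B` are all illustrative assumptions, whereas the paper analyzes general policy classes with function approximation. The key points the sketch shows are that the mini-batch is drawn from a single continuing Markov chain (samples are not i.i.d.) and that the actor and critic updates average over the batch rather than using a single sample.

```python
import numpy as np

# Hypothetical toy MDP; all sizes and constants are illustrative assumptions.
rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.95
P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a] = distribution over next states
R = rng.uniform(0, 1, size=(S, A))          # reward table

theta = np.zeros((S, A))  # actor: softmax policy parameters
w = np.zeros(S)           # critic: tabular value estimates

def policy(s):
    logits = theta[s] - theta[s].max()
    p = np.exp(logits)
    return p / p.sum()

alpha, beta = 0.05, 0.1  # actor / critic step sizes (assumed)
B, T = 32, 500           # mini-batch size, outer iterations (assumed)
s = rng.integers(S)      # Markovian sampling: one trajectory threads through all iterations

for t in range(T):
    grad, td_grad = np.zeros_like(theta), np.zeros_like(w)
    for _ in range(B):                          # collect a mini-batch from the same chain
        pi = policy(s)
        a = rng.choice(A, p=pi)
        s2 = rng.choice(S, p=P[s, a])
        delta = R[s, a] + gamma * w[s2] - w[s]  # TD error as advantage estimate
        td_grad[s] += delta                     # tabular semi-gradient for the critic
        g = -pi
        g[a] += 1.0                             # gradient of log-softmax at (s, a)
        grad[s] += delta * g                    # policy-gradient term with TD advantage
        s = s2                                  # chain continues: samples are not i.i.d.
    w += beta * td_grad / B                     # mini-batch averaged critic update
    theta += alpha * grad / B                   # mini-batch averaged actor update
    # NAC would instead precondition grad by the (estimated) inverse Fisher matrix.
```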