Span-Based Optimal Sample Complexity for Weakly Communicating and General Average Reward MDPs

Neural Information Processing Systems 

We study the sample complexity of learning an $\varepsilon$-optimal policy in an average-reward Markov decision process (MDP) under a generative model. For weakly communicating MDPs, we establish the complexity bound $\widetilde{O}\left(SA\frac{\mathsf{H}}{\varepsilon^{2}}\right)$, where $\mathsf{H}$ is the span of the bias function of the optimal policy and $SA$ is the cardinality of the state-action space. Our result is the first to be minimax optimal (up to logarithmic factors) in all parameters $S$, $A$, $\mathsf{H}$, and $\varepsilon$, improving on existing work that either assumes uniformly bounded mixing times for all policies or has suboptimal dependence on the parameters. We also initiate the study of sample complexity in general (multichain) average-reward MDPs. Both results are based on reducing the average-reward MDP to a discounted MDP, which requires new ideas in the general setting.
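As an informal illustration of the discounted reduction mentioned above, the following standard approximation (a sketch; the constants and exact form here are illustrative, not taken from the paper) shows why the span $\mathsf{H}$ governs the choice of discount factor in the weakly communicating case:

```latex
% Standard approximation between the discounted and average-reward optimal values,
% where h^* is the optimal bias function and H = sp(h^*) its span:
\bigl|(1-\gamma)\,V^{\star}_{\gamma}(s) - \rho^{\star}\bigr|
  \;\le\; (1-\gamma)\,\mathrm{sp}(h^{\star})
  \;=\; (1-\gamma)\,\mathsf{H}
  \qquad \text{for all states } s.
% Hence taking 1-\gamma \asymp \varepsilon/\mathsf{H} makes the \gamma-discounted MDP
% an O(\varepsilon)-accurate proxy for the average-reward problem, with effective
% horizon (1-\gamma)^{-1} \asymp \mathsf{H}/\varepsilon.
```

Under this choice of $\gamma$, a sufficiently accurate solution of the discounted MDP yields an $O(\varepsilon)$-optimal average-reward policy; in the general multichain setting no single bias span controls all states, which is why the reduction there requires new ideas.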