Provable Partially Observable Reinforcement Learning with Privileged Information

Neural Information Processing Systems 

Partial observability of the underlying states generally presents significant challenges for reinforcement learning (RL). In practice, certain privileged information, e.g., access to the underlying states from simulators, has been exploited in training and has achieved prominent empirical successes. To better understand the benefits of privileged information, we revisit and examine several simple paradigms used in practice in this setting. Specifically, we first formalize the empirical paradigm of expert distillation (also known as teacher-student learning) and demonstrate its pitfall: it can fail to find near-optimal policies. We then identify a condition on the partially observable environment, the deterministic filter condition, under which expert distillation achieves both polynomial sample and computational complexities. Furthermore, we investigate another empirically successful paradigm, asymmetric actor-critic, and focus on the more challenging setting of observable partially observable Markov decision processes (POMDPs). We develop a belief-weighted asymmetric actor-critic algorithm with polynomial sample and quasi-polynomial computational complexities; a key component is a new provable oracle for learning belief states that preserves filter stability under a misspecified model, which may be of independent interest. Finally, we investigate the provable efficiency of partially observable multi-agent RL (MARL) with privileged information.
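
To make the first paradigm concrete, the following is a minimal, illustrative PyTorch sketch of expert distillation (teacher-student learning) with privileged information; the network sizes, the synthetic batch, and all names are hypothetical placeholders rather than the paper's construction. The teacher acts on the latent state (e.g., available from a simulator), while the student sees only observations and is trained to match the teacher's action distribution.

```python
# Illustrative sketch of expert distillation with privileged information.
# All dimensions and data below are hypothetical placeholders, not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, OBS_DIM, NUM_ACTIONS = 8, 4, 3

teacher = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(),
                        nn.Linear(64, NUM_ACTIONS))   # assumed pretrained on privileged states
student = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(),
                        nn.Linear(64, NUM_ACTIONS))   # sees observations only
opt = torch.optim.Adam(student.parameters(), lr=3e-4)

def distill_step(states, observations):
    """One distillation update: match the student's action distribution
    (computed from observations) to the teacher's (computed from states)."""
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(states), dim=-1)
    student_logp = F.log_softmax(student(observations), dim=-1)
    loss = F.kl_div(student_logp, teacher_probs, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example call on a synthetic batch (placeholder data).
loss = distill_step(torch.randn(32, STATE_DIM), torch.randn(32, OBS_DIM))
```

Similarly, the sketch below illustrates only the generic asymmetric actor-critic idea, i.e., a critic trained on the privileged state and an actor conditioned on the observation. The paper's belief-weighting and its provable belief-learning oracle with filter stability are not reproduced here, and all dimensions and helpers are assumptions.

```python
# Illustrative sketch of a generic asymmetric actor-critic update.
# The critic sees the privileged state; the actor sees only the observation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical

STATE_DIM, OBS_DIM, NUM_ACTIONS, GAMMA = 8, 4, 3, 0.99

actor = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, NUM_ACTIONS))
critic = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(), nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def asymmetric_update(obs, state, action, reward, next_state, done):
    """One batch of updates on transitions (obs, state, action, reward, next_state, done)."""
    # Critic: TD(0) target computed from privileged states.
    with torch.no_grad():
        target = reward + GAMMA * (1.0 - done) * critic(next_state)
    value = critic(state)
    critic_loss = F.mse_loss(value, target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: policy gradient on observations, using the state-based advantage.
    advantage = (target - value).detach()
    dist = Categorical(logits=actor(obs))
    actor_loss = -(dist.log_prob(action) * advantage.squeeze(-1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

# Example call on a synthetic transition batch (placeholder data).
B = 32
asymmetric_update(torch.randn(B, OBS_DIM), torch.randn(B, STATE_DIM),
                  torch.randint(NUM_ACTIONS, (B,)), torch.randn(B, 1),
                  torch.randn(B, STATE_DIM), torch.zeros(B, 1))
```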