
Collaborating Authors: Fan, Ting-Han


A Contraction Approach to Model-based Reinforcement Learning

arXiv.org Artificial Intelligence

Model-based Reinforcement Learning has shown considerable experimental success. However, a theoretical understanding of it is still lacking. To this end, we analyze the error in cumulative reward under both stochastic and deterministic transitions using a contraction approach. We show that this approach requires no strong assumptions and recovers the typical error bound that is quadratic in the horizon. We further prove that branched rollouts can reduce this error and are essential for deterministic transitions to yield a Bellman contraction. Our results also apply to Imitation Learning, where we prove that GAN-type learning is better than Behavioral Cloning in continuous state and action spaces.
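As a concrete illustration of the branched-rollout scheme the abstract refers to, the sketch below starts many short model rollouts from states visited in the real environment instead of unrolling the learned model over one long horizon, which limits how far one-step model error can compound. The model, policy, and state buffer here are hypothetical placeholders, not the paper's implementation.

```python
# A minimal sketch of branched rollouts, assuming a learned one-step model
# `model(s, a) -> s'`, a policy `policy(s) -> a`, and a buffer of states
# visited in the real environment. All names and shapes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
state_dim, action_dim = 4, 2

def model(state, action):
    # Placeholder learned dynamics: a contractive linear map plus small noise.
    return 0.9 * state + 0.1 * np.tanh(action).sum() + 0.01 * rng.normal(size=state.shape)

def policy(state):
    # Placeholder policy.
    return np.tanh(state[:action_dim])

# States previously visited in the real environment (e.g. a replay buffer).
real_states = rng.normal(size=(1000, state_dim))

def branched_rollouts(num_branches=32, branch_length=5):
    """Start many short model rollouts from real states ("branches") rather
    than one long rollout, so one-step model error compounds over at most
    `branch_length` steps."""
    synthetic = []
    starts = real_states[rng.integers(len(real_states), size=num_branches)]
    for s in starts:
        for _ in range(branch_length):
            a = policy(s)
            s_next = model(s, a)
            synthetic.append((s, a, s_next))
            s = s_next
    return synthetic

data = branched_rollouts()
print(len(data), "synthetic transitions")
```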


Model Imitation for Model-Based Reinforcement Learning

arXiv.org Machine Learning

Model-based reinforcement learning (MBRL) aims to learn a dynamics model so as to reduce the number of interactions with the real-world environment. However, due to estimation error, rollouts in the learned model, especially over long horizons, fail to match those in the real environment. This mismatch severely hurts the sample complexity of MBRL. The phenomenon can be attributed to the fact that previous works learn one-step transition models by supervised learning, which has inherent difficulty ensuring that the distributions of multi-step rollouts match. Based on this observation, we propose to learn the synthesized model by matching the distributions of multi-step rollouts sampled from the synthesized model and from the real environment via a WGAN. We show theoretically that matching the two distributions minimizes the difference in cumulative reward between the real transition and the learned one. Our experiments also show that the proposed model-imitation method outperforms the state of the art in terms of sample complexity and average return.
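The sketch below illustrates the distribution-matching idea in a hedged, simplified form: a WGAN-style critic with weight clipping scores whole multi-step rollouts, and the one-step model is trained so that its rollouts become indistinguishable from real ones under a fixed policy. The toy "real" dynamics, network sizes, and hyperparameters are assumptions for illustration, not the paper's implementation.

```python
# A minimal sketch of matching multi-step rollout distributions with a
# WGAN-style critic (weight clipping), assuming a fixed policy and a
# differentiable one-step model. All modules and constants are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
state_dim, action_dim, horizon = 4, 2, 5

policy = lambda s: torch.tanh(s[..., :action_dim])                  # fixed behavior policy
real_step = lambda s, a: 0.9 * s + 0.1 * a.sum(-1, keepdim=True)    # toy "true" dynamics

model = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                      nn.Linear(64, state_dim))                     # learned transition model
critic = nn.Sequential(nn.Linear(horizon * (state_dim + action_dim), 64), nn.ReLU(),
                       nn.Linear(64, 1))                            # scores whole rollouts
model_step = lambda s, a: model(torch.cat([s, a], dim=-1))

def rollout(step_fn, s0):
    """Unroll `horizon` steps under the fixed policy; flatten (s, a) pairs."""
    traj, s = [], s0
    for _ in range(horizon):
        a = policy(s)
        traj += [s, a]
        s = step_fn(s, a)
    return torch.cat(traj, dim=-1)

opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
opt_m = torch.optim.RMSprop(model.parameters(), lr=5e-5)

for it in range(200):
    s0 = torch.randn(64, state_dim)           # start states drawn as if from real data
    real = rollout(real_step, s0)
    fake = rollout(model_step, s0)

    # Critic update: widen the estimated Wasserstein gap between real and fake rollouts.
    loss_c = critic(fake.detach()).mean() - critic(real).mean()
    opt_c.zero_grad()
    loss_c.backward()
    opt_c.step()
    for p in critic.parameters():             # weight clipping keeps the critic ~Lipschitz
        p.data.clamp_(-0.01, 0.01)

    # Model update: make synthesized rollouts indistinguishable from real ones.
    loss_m = -critic(rollout(model_step, s0)).mean()
    opt_m.zero_grad()
    loss_m.backward()
    opt_m.step()
```

Scoring entire H-step rollouts rather than single transitions is the point of the design: the critic can penalize exactly the multi-step distribution mismatch that one-step supervised learning does not see.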