Goto

Collaborating Authors

 Jing, Mingxuan


When to Update Your Model: Constrained Model-based Reinforcement Learning

arXiv.org Artificial Intelligence

Designing and analyzing model-based RL (MBRL) algorithms with guaranteed monotonic improvement has been challenging, mainly due to the interdependence between policy optimization and model learning. Existing discrepancy bounds generally ignore the impact of model shifts, and the corresponding algorithms are prone to performance degradation caused by drastic model updates. In this work, we first propose a novel and general theoretical scheme for a non-decreasing performance guarantee in MBRL. The bounds we then derive reveal the relationship between model shifts and performance improvement. These findings encourage us to formulate a constrained lower-bound optimization problem that preserves the monotonicity of MBRL. A further example demonstrates that learning models from a dynamically varying number of explorations benefits the eventual returns. Motivated by these analyses, we design a simple but effective algorithm, CMLO (Constrained Model-shift Lower-bound Optimization), which introduces an event-triggered mechanism that flexibly determines when to update the model. Experiments show that CMLO surpasses other state-of-the-art methods and provides a boost when various policy optimization methods are employed.
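
A minimal sketch of how an event-triggered model-update rule of this kind could look, on a toy one-dimensional task. The threshold, the one-step prediction-error statistic, and all names (env_step, fit_model, model_error) are illustrative assumptions, not CMLO's actual implementation; policy optimization is reduced to a crude model-derived controller.

# Illustrative sketch: event-triggered model updates in a model-based RL loop.
# Toy environment, linear model, and trigger statistic are assumptions; this is
# not the CMLO algorithm itself, only the "update the model on an event" idea.
import numpy as np

rng = np.random.default_rng(0)

def env_step(s, a):
    # Toy 1-D dynamics with noise; stands in for the real environment.
    return 0.9 * s + a + 0.05 * rng.standard_normal()

def fit_model(data):
    # Least-squares fit of s' ~ [s, a]; stands in for learned dynamics.
    X = np.array([[s, a] for s, a, _ in data])
    y = np.array([sp for _, _, sp in data])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def model_error(w, batch):
    # Mean one-step prediction error of the current model on fresh transitions.
    return np.mean([abs(w[0] * s + w[1] * a - sp) for s, a, sp in batch])

threshold = 0.2              # event-trigger level (assumed hyperparameter)
buffer, fresh = [], []
s, w = 0.0, np.zeros(2)

for t in range(2000):
    # Crude model-derived controller; a real agent would run policy optimization here.
    a = float(np.clip(-w[0] * s if w.any() else rng.uniform(-1, 1), -1, 1))
    sp = env_step(s, a)
    buffer.append((s, a, sp))
    fresh.append((s, a, sp))
    s = sp

    # Event-triggered rule: refit the dynamics model only when its error on the
    # most recently gathered transitions exceeds the threshold, rather than on
    # a fixed schedule after every policy step.
    if len(fresh) >= 50 and (not w.any() or model_error(w, fresh) > threshold):
        w = fit_model(buffer)
        fresh = []           # reset the monitoring window after an update

The point of the trigger is that the model is left untouched while it still explains the data the current policy visits, which is the kind of flexibility in update timing the abstract describes.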


Learning and Inferring Movement with Deep Generative Model

arXiv.org Machine Learning

Learning and inferring movement is a challenging problem due to its high dimensionality and its dependence on varied environments and tasks. In this paper, we propose an effective probabilistic method for learning and inference of basic movements. The motion planning problem is formulated as learning on a directed graphical model, and a deep generative model is used to perform learning and inference from demonstrations. An important characteristic of this method is that it flexibly incorporates task descriptors and context information for long-term planning, and it can be combined with dynamical systems for robot control. Experimental validation on robotic approaching and path planning tasks shows advantages over baseline methods with limited training data.
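
A minimal sketch of one way a deep generative model could be used for learning and inferring movements: a small conditional VAE over fixed-length trajectories, conditioned on a task/context descriptor. The architecture sizes, trajectory encoding, and synthetic reaching data are assumptions for illustration, not the paper's model.

# Illustrative sketch: conditional VAE over short movement trajectories,
# conditioned on a task/context descriptor (assumed setup, not the paper's model).
import torch
import torch.nn as nn

T, D_CTX, D_Z = 20, 2, 4          # trajectory length, context dim, latent dim

class TrajCVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(T + D_CTX, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * D_Z))      # mean and log-variance
        self.dec = nn.Sequential(nn.Linear(D_Z + D_CTX, 64), nn.ReLU(),
                                 nn.Linear(64, T))

    def forward(self, traj, ctx):
        h = self.enc(torch.cat([traj, ctx], dim=-1))
        mu, logvar = h.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        recon = self.dec(torch.cat([z, ctx], dim=-1))
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(-1).mean()
        return ((recon - traj) ** 2).sum(-1).mean() + kl      # ELBO-style loss

# Toy demonstrations: 1-D reaching trajectories whose endpoint depends on the context.
ctx = torch.rand(256, D_CTX)
goal = ctx.sum(-1, keepdim=True)
traj = goal * torch.linspace(0, 1, T) + 0.01 * torch.randn(256, T)

model = TrajCVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = model(traj, ctx)
    loss.backward()
    opt.step()

# Inference: sample a new movement plan for a novel task context.
with torch.no_grad():
    z = torch.randn(1, D_Z)
    new_ctx = torch.tensor([[0.3, 0.7]])
    plan = model.dec(torch.cat([z, new_ctx], dim=-1))   # generated trajectory

Conditioning the decoder on the context is what lets one model produce different plans for different task descriptors; a dynamical-system controller would then track the generated trajectory on the robot.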


Adversarial Task Transfer from Preference

arXiv.org Machine Learning

Task transfer is extremely important for reinforcement learning, since it makes it possible to generalize to new tasks. One main goal of task transfer in reinforcement learning is to transfer an agent's action policy from the original basic task to a specific target task. Existing work on this challenging problem usually requires accurate hand-coded cost functions or rich demonstrations on the target task, a strong requirement that is difficult, if not impossible, to satisfy in many practical scenarios. In this work, we develop a novel task transfer framework that effectively performs policy transfer using preferences only. A hidden cost model for preferences and adversarial training are combined to perform the task transfer. We give a theoretical analysis of the convergence of the proposed algorithm and perform extensive simulations on well-known examples to validate the theoretical results.
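
A minimal sketch of the preference side of such a framework: a hidden cost network trained from pairwise trajectory preferences with a Bradley-Terry style likelihood. The toy trajectories, feature dimension, and cost network are assumptions for illustration; the adversarial policy-optimization step that would alternate with this update is only indicated in a comment.

# Illustrative sketch: learning a hidden cost from pairwise trajectory preferences
# (Bradley-Terry style). Toy data and network sizes are assumptions, not the paper's.
import torch
import torch.nn as nn

D = 4                                   # per-step feature dimension (assumed)
cost_net = nn.Sequential(nn.Linear(D, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(cost_net.parameters(), lr=1e-3)

def traj_cost(traj):
    # Trajectory cost = sum of learned per-step costs.
    return cost_net(traj).sum()

def sample_pair():
    # Synthetic preference pair (preferred, rejected): under the hidden "true"
    # preference, trajectories with smaller feature norm are preferred.
    a, b = torch.randn(10, D), torch.randn(10, D)
    return (a, b) if a.norm() < b.norm() else (b, a)

for step in range(500):
    preferred, rejected = sample_pair()
    # Bradley-Terry likelihood: the preferred trajectory should receive lower cost.
    logits = traj_cost(rejected) - traj_cost(preferred)
    loss = nn.functional.softplus(-logits)   # equals -log sigmoid(logits)
    opt.zero_grad()
    loss.backward()
    opt.step()

# A policy optimizer (e.g. an adversarial imitation / RL step) would then minimize
# the learned cost on its own rollouts, alternating with further preference updates.

Here the cost is never hand-coded: it is recovered purely from which of two trajectories is preferred, the weak supervision the abstract relies on in place of cost functions or demonstrations.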