Fast Bellman Updates for Wasserstein Distributionally Robust MDPs

Neural Information Processing Systems

Markov decision processes (MDPs) are often highly sensitive to ambiguity in their model parameters. In recent years, robust MDPs have emerged as an effective framework for overcoming this challenge. Distributionally robust MDPs extend the robust MDP framework by incorporating distributional information about the uncertain model parameters, alleviating the conservatism of robust MDPs.
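To make the robust MDP idea concrete, here is a minimal sketch of a robust Bellman backup in which an adversary picks the worst transition kernel from a finite ambiguity set. This is an illustrative simplification, not the paper's Wasserstein-ball formulation; the function name and the finite-set ambiguity model are assumptions for this example.

```python
import numpy as np

def robust_bellman_update(V, rewards, kernels, gamma=0.9):
    """One robust Bellman backup over a finite ambiguity set.

    V       : (S,) current value estimate
    rewards : (S, A) immediate rewards
    kernels : list of (S, A, S) candidate transition matrices
              (a finite stand-in for a Wasserstein ambiguity set)
    """
    # Expected next-state value under each candidate kernel.
    next_vals = np.stack([np.einsum("ijk,k->ij", P, V) for P in kernels])
    worst = next_vals.min(axis=0)   # adversary picks the worst-case kernel
    Q = rewards + gamma * worst     # robust Q-values
    return Q.max(axis=1)            # greedy maximization over actions
```

Replacing the finite minimum with a minimization over a Wasserstein ball around a nominal kernel recovers the distributionally robust setting; the paper's contribution is making that inner minimization fast.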


Context Shift Reduction for Offline Meta-Reinforcement Learning Yunkai Gao

Neural Information Processing Systems

Offline meta-reinforcement learning (OMRL) utilizes pre-collected offline datasets to enhance the agent's generalization ability on unseen tasks.


The Curious Price of Distributional Robustness in Reinforcement Learning with a Generative Model Laixi Shi (Caltech), Gen Li

Neural Information Processing Systems

In this paper, we are particularly interested in understanding whether, and how, the choice of distributional robustness bears statistical implications in learning the desired policy, by studying the sample complexity in the widely-used generative model (Kearns and Singh, 1999).