
Multi-Task Reinforcement Learning as a Hidden-Parameter Block MDP

Zhang, Amy, Sodhani, Shagun, Khetarpal, Khimya, Pineau, Joelle

arXiv.org Artificial Intelligence

Multi-task reinforcement learning is a rich paradigm where information from previously seen environments can be leveraged for better performance and improved sample efficiency in new environments. In this work, we leverage ideas of common structure underlying a family of Markov decision processes (MDPs) to improve performance in the few-shot regime. We use assumptions of structure from Hidden-Parameter MDPs and Block MDPs to propose a new framework, HiP-BMDP, and an approach for learning a common representation and universal dynamics model. To this end, we provide transfer and generalization bounds based on task and state similarity, along with sample complexity bounds that depend on the aggregate number of samples across tasks, rather than the number of tasks, a significant improvement over prior work. To demonstrate the efficacy of the proposed method, we empirically compare against other multi-task and meta-reinforcement learning baselines and show improvements.
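The structural assumption in the abstract above can be sketched in a few lines: all tasks share one observation encoder and one universal dynamics function, and only a low-dimensional per-task parameter differs. This is an illustrative sketch, not the paper's implementation; the names phi, f, and theta_k, the linear encoder, and the toy dynamics are all assumptions made for the example.

```python
import numpy as np

# Illustrative HiP-BMDP structure: a shared encoder phi maps observations to
# a common latent state, and a universal dynamics model f(s, a, theta_k) is
# shared across tasks. Only the per-task embedding theta_k differs.

def phi(obs, W):
    # Shared linear encoder from observation to latent state.
    return W @ obs

def f(s, a, theta_k):
    # Universal dynamics: tasks differ only through theta_k.
    return s + theta_k * a

W = np.array([[0.5, 0.5]])              # encoder weights, shared by all tasks
thetas = {"task0": 0.8, "task1": 1.5}   # one scalar embedding per task

obs = np.array([1.0, 3.0])
s = phi(obs, W)
for name, theta_k in thetas.items():
    # Same latent state and action, different transition per task.
    print(name, f(s, 1.0, theta_k))
```

Because the encoder and dynamics are shared, samples from every task inform the same model, which is why sample complexity can scale with the aggregate number of samples rather than the number of tasks.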


Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes

Killian, Taylor W., Daulton, Samuel, Konidaris, George, Doshi-Velez, Finale

Neural Information Processing Systems

We introduce a new formulation of the Hidden Parameter Markov Decision Process (HiP-MDP), a framework for modeling families of related tasks using low-dimensional latent embeddings. Our new framework correctly models the joint uncertainty in the latent parameters and the state space. We also replace the original Gaussian Process-based model with a Bayesian Neural Network, enabling more scalable inference. Thus, we expand the scope of the HiP-MDP to applications with higher dimensions and more complex dynamics.
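The core object described above, a family of related tasks indexed by a low-dimensional latent embedding, can be sketched minimally as a single parametric transition function whose hidden parameter selects the task instance. The class name, the 1-D point-mass dynamics, and the Gaussian task prior below are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Minimal HiP-MDP sketch: one shared dynamics function T(s, a, theta), where
# the hidden low-dimensional parameter theta identifies the task instance.

class HiPMDPFamily:
    def __init__(self, theta_dim=1, seed=0):
        self.rng = np.random.default_rng(seed)
        self.theta_dim = theta_dim

    def sample_task(self):
        # Each task instance is identified by a hidden latent embedding theta.
        return self.rng.normal(size=self.theta_dim)

    def step(self, state, action, theta):
        # Shared parametric dynamics: theta modulates the transition.
        # Here a 1-D point mass whose effective gain depends on theta.
        gain = 1.0 + 0.5 * theta[0]
        noise = self.rng.normal(scale=0.01, size=state.shape)
        return state + gain * action + noise

family = HiPMDPFamily(seed=0)
theta_a, theta_b = family.sample_task(), family.sample_task()
s = np.zeros(1)
# The same (state, action) pair transitions differently across tasks,
# because the hidden parameter differs.
print(family.step(s, np.ones(1), theta_a))
print(family.step(s, np.ones(1), theta_b))
```

Transfer then amounts to inferring theta for a new task from a few transitions, rather than learning its dynamics from scratch.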



Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes

Killian, Taylor W. (Harvard University) | Konidaris, George (Brown University) | Doshi-Velez, Finale (Harvard University)

AAAI Conferences

An intriguing application of transfer learning emerges when tasks arise with similar, but not identical, dynamics. Hidden Parameter Markov Decision Processes (HiP-MDP) embed these tasks into a low-dimensional space; given the embedding parameters one can identify the MDP for a particular task. However, the original formulation of HiP-MDP had a critical flaw: the embedding uncertainty was modeled independently of the agent's state uncertainty, requiring an arduous training procedure. In this work, we apply a Gaussian Process latent variable model to jointly model the dynamics and the embedding, leading to a more elegant formulation, one that allows for better uncertainty quantification and thus more robust transfer.
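The flaw described above, modeling embedding uncertainty independently of state uncertainty, can be contrasted with joint modeling in a short Monte Carlo sketch. This is an illustrative toy, not the paper's GP latent variable model; the linear dynamics, the Gaussian posterior over theta, and all parameter values are assumptions made for the example.

```python
import numpy as np

# Joint predictive uncertainty in a toy HiP-MDP: the distribution over next
# states marginalizes over the posterior on the latent embedding theta AND
# the transition noise together, instead of fixing a point estimate of theta.

rng = np.random.default_rng(1)

def dynamics(state, action, theta):
    # Shared transition function; theta is the task's hidden parameter.
    return state + (1.0 + theta) * action

def predictive_samples(state, action, theta_mean, theta_std,
                       noise_std=0.05, n=2000):
    # Joint Monte Carlo: draw theta from its posterior, then add state noise.
    thetas = rng.normal(theta_mean, theta_std, size=n)
    noise = rng.normal(0.0, noise_std, size=n)
    return dynamics(state, action, thetas) + noise

samples = predictive_samples(0.0, 1.0, theta_mean=0.2, theta_std=0.3)
# Predictive variance reflects both sources of uncertainty:
# roughly (0.3 * action)^2 + noise_std^2 here.
print(samples.mean(), samples.var())
```

A point estimate of theta would report only the noise_std^2 term, understating the risk of acting in a task whose embedding is still uncertain; the joint model keeps both terms, which is what makes the transfer robust.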


Transfer Learning Across Patient Variations with Hidden Parameter Markov Decision Processes

Killian, Taylor, Konidaris, George, Doshi-Velez, Finale

arXiv.org Machine Learning

Due to physiological variation, patients diagnosed with the same condition may exhibit divergent, but related, responses to the same treatments. Hidden Parameter Markov Decision Processes (HiP-MDPs) tackle this transfer-learning problem by embedding these tasks into a low-dimensional space. However, the original formulation of the HiP-MDP had a critical flaw: the embedding uncertainty was modeled independently of the agent's state uncertainty, requiring an unnatural training procedure in which all tasks visited every part of the state space (possible for robots that can be moved to a particular location, but impossible for human patients). We update the HiP-MDP framework and extend it to more robustly develop personalized medicine strategies for HIV treatment.