Hidden Parameter Markov Decision Process
Reviews: Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes
Summary: This paper presents a new transfer-learning approach that uses Bayesian Neural Networks (BNNs) in MDPs. The authors build on the existing Hidden Parameter MDP framework and replace the Gaussian process with a BNN, thereby also modeling the joint uncertainty in the latent weights and the state space. Overall, the proposed approach is sound, well developed, and appears to make inference more scalable. The authors also show that it works well by applying it to multiple domains. The paper is extremely well written.
Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes
Killian, Taylor W., Daulton, Samuel, Konidaris, George, Doshi-Velez, Finale
We introduce a new formulation of the Hidden Parameter Markov Decision Process (HiP-MDP), a framework for modeling families of related tasks using low-dimensional latent embeddings. We also replace the original Gaussian Process-based model with a Bayesian Neural Network, enabling more scalable inference. Thus, we expand the scope of the HiP-MDP to applications with higher dimensions and more complex dynamics.
Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes
Killian, Taylor W. (Harvard University) | Konidaris, George (Brown University) | Doshi-Velez, Finale (Harvard University)
An intriguing application of transfer learning emerges when tasks arise with similar, but not identical, dynamics. Hidden Parameter Markov Decision Processes (HiP-MDPs) embed these tasks into a low-dimensional space; given the embedding parameters, one can identify the MDP for a particular task. However, the original formulation of the HiP-MDP had a critical flaw: the embedding uncertainty was modeled independently of the agent's state uncertainty, requiring an arduous training procedure. In this work, we use a Bayesian Neural Network to jointly model the dynamics and the embedding, leading to a more elegant formulation, one that allows for better uncertainty quantification and thus more robust transfer.
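To make the joint-modeling idea concrete, here is a minimal sketch, not the authors' implementation, of a dynamics network that takes the latent task embedding as an extra input. All names, dimensions, and the random weight draws are illustrative assumptions; a trained BNN would sample weights from a learned approximate posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, ACTION_DIM, LATENT_DIM, HIDDEN = 4, 2, 2, 32  # illustrative sizes

def sample_weights():
    """Draw one set of network weights; a trained BNN would draw these
    from a learned approximate posterior rather than this prior."""
    in_dim = STATE_DIM + ACTION_DIM + LATENT_DIM
    return {
        "W1": rng.normal(0.0, 0.1, (in_dim, HIDDEN)),
        "b1": np.zeros(HIDDEN),
        "W2": rng.normal(0.0, 0.1, (HIDDEN, STATE_DIM)),
        "b2": np.zeros(STATE_DIM),
    }

def predict_next_state(weights, state, action, w_b):
    """Predict s' from (s, a) plus the task's latent embedding w_b."""
    x = np.concatenate([state, action, w_b])
    h = np.tanh(x @ weights["W1"] + weights["b1"])
    return h @ weights["W2"] + weights["b2"]

# Joint uncertainty: sample both network weights and latent embeddings,
# so both sources of uncertainty propagate into the predicted dynamics.
weight_samples = [sample_weights() for _ in range(10)]
embedding_samples = rng.normal(0.0, 1.0, (10, LATENT_DIM))  # stand-in for q(w_b)

s, a = np.zeros(STATE_DIM), np.ones(ACTION_DIM)
preds = np.array([predict_next_state(w, s, a, w_b)
                  for w, w_b in zip(weight_samples, embedding_samples)])
print(preds.mean(axis=0), preds.std(axis=0))  # predictive mean and spread
```

Because the embedding enters the network alongside the state, its uncertainty is no longer modeled independently of the state uncertainty; both show up in the spread of the sampled predictions above.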
Transfer Learning Across Patient Variations with Hidden Parameter Markov Decision Processes
Killian, Taylor, Konidaris, George, Doshi-Velez, Finale
Due to physiological variation, patients diagnosed with the same condition may exhibit divergent, but related, responses to the same treatments. Hidden Parameter Markov Decision Processes (HiP-MDPs) tackle this transfer-learning problem by embedding these tasks into a low-dimensional space. However, the original formulation of the HiP-MDP had a critical flaw: the embedding uncertainty was modeled independently of the agent's state uncertainty, requiring an unnatural training procedure in which all tasks visited every part of the state space; this is possible for robots, which can be moved to a particular location, but impossible for human patients. We update the HiP-MDP framework and extend it to more robustly develop personalized medicine strategies for HIV treatment.
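The payoff of this design is that adapting to a new patient only requires inferring the low-dimensional embedding, not relearning the dynamics. The following is a hedged sketch of that transfer step under simplified assumptions: a frozen linear stand-in for the trained dynamics model, and a point estimate of the embedding fit by least squares rather than the posterior inference the paper performs. All names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
STATE_DIM, ACTION_DIM, LATENT_DIM = 4, 2, 2  # illustrative sizes

# Frozen stand-in for the trained, task-shared dynamics model.
W = rng.normal(0.0, 0.3, (STATE_DIM + ACTION_DIM + LATENT_DIM, STATE_DIM))

def dynamics(s, a, w_b):
    """Next-state prediction given state, action, and latent embedding."""
    return np.concatenate([s, a, w_b]) @ W

# A few transitions observed from the new task (the "patient"), generated
# here with a hidden true embedding that the agent must recover.
w_true = np.array([0.8, -0.5])
transitions = [(s, a, dynamics(s, a, w_true))
               for s, a in ((rng.normal(size=STATE_DIM),
                             rng.normal(size=ACTION_DIM))
                            for _ in range(20))]

def embedding_loss(w_b):
    """Squared prediction error of the frozen model under embedding w_b."""
    return sum(np.sum((dynamics(s, a, w_b) - s_next) ** 2)
               for s, a, s_next in transitions)

w_hat = minimize(embedding_loss, x0=np.zeros(LATENT_DIM)).x
print(w_hat)  # recovers w_true; only LATENT_DIM numbers were fit
```

With only LATENT_DIM free parameters to estimate, a handful of observed transitions suffices, which is what makes transfer to data-poor settings such as individual patients plausible.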
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (1.00)
- Health & Medicine > Therapeutic Area > Immunology > HIV (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.73)
- Information Technology > Artificial Intelligence > Machine Learning > Transfer Learning (0.63)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.62)
Hidden Parameter Markov Decision Processes: An Emerging Paradigm for Modeling Families of Related Tasks
Konidaris, George (Duke University) | Doshi-Velez, Finale (Harvard Medical School)
The goal of transfer is to use knowledge obtained by solving one task to improve a robot's (or software agent's) performance in future tasks. In general, we do not expect this to work; for transfer to be feasible, there must be something in common between the source task(s) and goal task(s). The question at the core of the transfer learning enterprise is therefore: what makes two tasks related? Or, more generally, how do you define a family of related tasks? Given a precise definition of how a particular family of tasks is related, we can formulate clear optimization methods for selecting source tasks and determining what knowledge should be imported from the source task(s), and how it should be used in the target task(s).

This paper describes one model that has appeared in several different research scenarios where an agent is faced with a family of tasks that have similar, but not identical, dynamics (or reward functions). For example, a human learning to play baseball may, over the course of their career, be exposed to several different bats, each with slightly different weights and lengths. A human who has learned to play baseball well with one bat would be expected to be able to pick up any similar bat and use it. Similarly, when learning to drive a car, one may learn in more than one car, and then be expected to be able to drive any make and model of car (within reasonable variations) with little or no relearning. These examples are instances of exactly the kind of flexible, reliable, and sample-efficient behavior that we should be aiming to achieve in robotics applications.

One way to model such a family of tasks is to posit that they are generated by a small set of latent parameters (e.g., the length and weight of the bat, or parameters describing the various physical properties of the car's steering system and clutch) that are fixed for each problem instance (e.g., for each bat, or car), but are not directly observable by the agent. Defining a distribution over these latent parameters results in a family of related tasks, and transfer is feasible to the extent that the number of latent variables is small, the task dynamics (or reward function) vary smoothly with them, and to the extent to which they can either be ignored or identified using transition data from the task. This model has appeared under several different names in the literature; we refer to it as a hidden-parameter Markov decision process (or HiP-MDP).
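Stated formally, the generative model sketched above can be rendered as follows. This is a minimal sketch with notation chosen here for illustration; the papers in this collection vary in details such as whether the reward also depends on the latent parameters.

```latex
% Generative model for a single HiP-MDP task instance b (notation illustrative):
\theta_b \sim P_\Theta
  \quad \text{drawn once per instance, fixed thereafter, never observed;} \\
M_{\theta_b} = (S, A, T_{\theta_b}, R, \gamma),
  \qquad s' \sim T\bigl(s' \mid s, a, \theta_b\bigr). \\
\text{Transfer is feasible when } \dim(\Theta) \text{ is small and }
T(\cdot \mid s, a, \theta) \text{ varies smoothly in } \theta.
```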