Konidaris, George


Robust and Efficient Transfer Learning with Hidden-Parameter Markov Decision Processes

arXiv.org Machine Learning

We introduce a new formulation of the Hidden Parameter Markov Decision Process (HiP-MDP), a framework for modeling families of related tasks using low-dimensional latent embeddings. Our new framework correctly models the joint uncertainty in the latent parameters and the state space. We also replace the original Gaussian Process-based model with a Bayesian Neural Network, enabling more scalable inference. Thus, we expand the scope of the HiP-MDP to applications with higher dimensions and more complex dynamics.
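
A minimal sketch of the kind of model described above, under stated assumptions: the class name HiPDynamics, the dimensions, and the use of MC dropout as a cheap stand-in for a full Bayesian neural network are all illustrative choices, not the paper's implementation. The structural point is that a per-task latent embedding w_b is fed to a shared dynamics network and learned jointly with its weights.

```python
# Illustrative sketch only (not the paper's implementation): a shared dynamics
# network conditioned on a per-task latent embedding w_b, with MC dropout as a
# cheap stand-in for a Bayesian neural network. Names and sizes are assumptions.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, LATENT_DIM = 4, 2, 3   # illustrative sizes

class HiPDynamics(nn.Module):
    def __init__(self, n_tasks):
        super().__init__()
        self.embeddings = nn.Embedding(n_tasks, LATENT_DIM)  # one w_b per task instance
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM + LATENT_DIM, 64),
            nn.ReLU(),
            nn.Dropout(p=0.1),   # dropout kept active at test time yields MC-dropout uncertainty
            nn.Linear(64, STATE_DIM),
        )

    def forward(self, task_id, state, action):
        w_b = self.embeddings(task_id)                # latent task parameters
        x = torch.cat([state, action, w_b], dim=-1)
        return self.net(x)                            # predicted next state

# Training step (sketch): gradients flow into both the shared weights and the
# per-task embeddings, so the embedding and the dynamics are learned jointly.
model = HiPDynamics(n_tasks=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
task = torch.tensor([0])
s, a, s_next = torch.randn(1, STATE_DIM), torch.randn(1, ACTION_DIM), torch.randn(1, STATE_DIM)
loss = ((model(task, s, a) - s_next) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()
```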


Mean Actor Critic

arXiv.org Machine Learning

We propose a new algorithm, Mean Actor-Critic (MAC), for discrete-action continuous-state reinforcement learning. MAC is a policy gradient algorithm that uses the agent's explicit representation of all action values to estimate the gradient of the policy, rather than using only the actions that were actually executed. This significantly reduces variance in the gradient updates and removes the need for a variance-reduction baseline. We show empirical results on two control domains, where MAC performs as well as or better than other policy gradient approaches, and on five Atari games, where MAC is competitive with state-of-the-art policy search algorithms.
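
As a rough illustration of the estimator described above (not the authors' code), the sketch below computes the policy gradient for a linear-softmax policy by summing over all actions weighted by estimated Q-values, rather than using only the sampled action; q_hat is a hypothetical stand-in for a learned critic.

```python
# Illustrative sketch (not the authors' code): MAC-style gradient for a
# linear-softmax policy, summing over all actions weighted by estimated
# Q-values instead of using only the sampled action.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def mac_gradient(theta, states, q_hat):
    """theta: (n_features, n_actions) policy parameters.
    states: (N, n_features) sampled state features.
    q_hat(s): estimated Q-values for every action in state s (assumed critic)."""
    n_actions = theta.shape[1]
    grad = np.zeros_like(theta)
    for s in states:
        pi = softmax(theta.T @ s)                 # pi(a | s) for all a
        q = q_hat(s)
        for a in range(n_actions):
            # gradient of pi(a | s) w.r.t. theta for a linear-softmax policy
            dpi = np.outer(s, pi[a] * (np.eye(n_actions)[a] - pi))
            grad += dpi * q[a]                    # sum over ALL actions
    return grad / len(states)

# Toy usage with a made-up critic:
theta = np.zeros((4, 3))
states = np.random.randn(10, 4)
g = mac_gradient(theta, states, q_hat=lambda s: np.array([1.0, 0.5, 0.0]))
```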


An Analysis of Monte Carlo Tree Search

AAAI Conferences

Monte Carlo Tree Search (MCTS) is a family of directed search algorithms that has gained widespread attention in recent years. Despite the vast amount of research into MCTS, the effect of modifications on the algorithm, as well as the manner in which it performs in various domains, is still not fully known. In particular, the effect of using knowledge-heavy rollouts in MCTS remains poorly understood, with surprising results demonstrating that better-informed rollouts often result in worse-performing agents. We present experimental evidence suggesting that, under certain smoothness conditions, uniformly random simulation policies preserve the ordering over action preferences. This explains the success of MCTS despite its common use of these rollouts to evaluate states. We further analyze non-uniformly random rollout policies and describe conditions under which they offer improved performance.
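
The sketch below is an illustrative example of the rollout evaluation the analysis is concerned with: root actions ranked by the mean return of uniformly random simulations. The step, actions, and is_terminal callables are assumed simulator interfaces, not part of the paper.

```python
# Illustrative sketch (not the paper's experimental code): ranking root actions
# by the mean return of uniformly random rollouts. `step(state, action)` returns
# (next_state, reward); `actions` and `is_terminal` are assumed simulator hooks.
import random

def uniform_rollout(state, step, actions, is_terminal, depth=50):
    """Return of one uniformly random simulation from `state`."""
    total = 0.0
    for _ in range(depth):
        if is_terminal(state):
            break
        state, r = step(state, random.choice(actions(state)))
        total += r
    return total

def rollout_preferences(root, step, actions, is_terminal, n_sims=100):
    """Order root actions by estimated value; the analysis above asks when
    this ordering matches the true preference ordering."""
    values = {}
    for a in actions(root):
        returns = []
        for _ in range(n_sims):
            s, r = step(root, a)
            returns.append(r + uniform_rollout(s, step, actions, is_terminal))
        values[a] = sum(returns) / n_sims
    return sorted(values, key=values.get, reverse=True)
```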


Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes

AAAI Conferences

An intriguing application of transfer learning emerges when tasks arise with similar, but not identical, dynamics. Hidden Parameter Markov Decision Processes (HiP-MDP) embed these tasks into a low-dimensional space; given the embedding parameters one can identify the MDP for a particular task. However, the original formulation of HiP-MDP had a critical flaw: the embedding uncertainty was modeled independently of the agent's state uncertainty, requiring an arduous training procedure. In this work, we apply a Gaussian Process latent variable model to jointly model the dynamics and the embedding, leading to a more elegant formulation, one that allows for better uncertainty quantification and thus more robust transfer.
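
A rough sketch of the joint formulation, with illustrative notation only ($w_b$ denotes the latent embedding of task instance $b$ and $k$ a kernel):

$$
s_{t+1} = f(s_t, a_t, w_b) + \epsilon, \qquad f \sim \mathcal{GP}(0, k), \qquad w_b \sim \mathcal{N}(0, I),
$$

with the posterior over $f$ and $w_b$ inferred jointly, so that uncertainty in the embedding and in the predicted next state is coupled rather than modeled independently.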


Reinforcement Learning with Parameterized Actions

AAAI Conferences

We introduce a model-free algorithm for learning in Markov decision processes with parameterized actions—discrete actions with continuous parameters. At each step the agent must select both which action to use and which parameters to use with that action. We introduce the Q-PAMDP algorithm for learning in these domains, show that it converges to a local optimum, and compare it to direct policy search in the goal-scoring and Platform domains.
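
The alternating structure described above can be sketched as follows (illustrative only; learn_action_values and improve_parameter_policy are hypothetical placeholders for a discrete-action Q-learner and a policy-search step over each action's continuous parameters):

```python
# Illustrative sketch of the alternating optimization described above;
# `learn_action_values` and `improve_parameter_policy` are hypothetical
# placeholders, not the paper's interface.
def q_pamdp(env, n_iterations, learn_action_values, improve_parameter_policy,
            q_init, param_policy_init):
    q, param_policy = q_init, param_policy_init
    for _ in range(n_iterations):
        # 1) With the parameter policy fixed, the problem reduces to a
        #    discrete-action MDP: update the action-value estimates.
        q = learn_action_values(env, param_policy, q)
        # 2) With the action values fixed, improve the continuous-parameter
        #    policy used by each discrete action.
        param_policy = improve_parameter_policy(env, q, param_policy)
    return q, param_policy
```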


Reinforcement Learning with Parameterized Actions

arXiv.org Artificial Intelligence

We introduce a model-free algorithm for learning in Markov decision processes with parameterized actions: discrete actions with continuous parameters. At each step the agent must select both which action to use and which parameters to use with that action. We introduce the Q-PAMDP algorithm for learning in these domains, show that it converges to a local optimum, and compare it to direct policy search in the goal-scoring and Platform domains.


Probabilistic Planning for Decentralized Multi-Robot Systems

AAAI Conferences

Multi-robot systems are an exciting application domain for AI research, and for Dec-POMDPs in particular. MacDec-POMDP methods can produce high-quality general solutions for realistic heterogeneous multi-robot coordination problems by automatically generating control and communication policies, given a model. In contrast to most existing multi-robot methods, which are specialized to a particular problem class, our approach can synthesize policies that exploit any opportunities for coordination present in the problem, while balancing uncertainty, sensor information, and information about other agents.


A Summary of the Twenty-Ninth AAAI Conference on Artificial Intelligence

AI Magazine

The Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI-15) was held in January 2015 in Austin, Texas, USA. The conference program was co-chaired by Sven Koenig and Blai Bonet. This report contains reflective summaries of the main conference, the robotics program, the AI and robotics workshop, the virtual agent exhibition, the what's hot track, the competition panel, the senior member track, student and outreach activities, the student abstract and poster program, the doctoral consortium, the women's mentoring event, and the demonstrations program.


Symbol Acquisition for Probabilistic High-Level Planning

AAAI Conferences

We introduce a framework that enables an agent to autonomously learn its own symbolic representation of a low-level, continuous environment. Propositional symbols are formalized as names for probability distributions, providing a natural means of dealing with uncertain representations and probabilistic plans. We determine the symbols that are sufficient for computing the probability with which a plan will succeed, and demonstrate the acquisition of a symbolic representation in a computer game domain.
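
As a loose illustration of the idea that symbols name probability distributions, the sketch below represents a learned symbol by a density over the low-level states in which it holds; Symbol, learn_symbol, and fit_density are hypothetical names, not the paper's interface.

```python
# Illustrative sketch only: a propositional symbol as a name for a probability
# distribution over low-level states. `fit_density` is a hypothetical density
# estimator; none of these names come from the paper.
class Symbol:
    def __init__(self, name, density):
        self.name = name          # e.g. "door_open"
        self.density = density    # callable: p(state | symbol holds)

    def probability(self, state):
        return self.density(state)

def learn_symbol(name, states, fit_density):
    """Ground a new symbol in the set of low-level states where it held."""
    return Symbol(name, fit_density(states))
```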