Collaborating Authors

Suay, Halit Bener


Using Causal Models for Learning from Demonstration

AAAI Conferences

Most learning from demonstration algorithms are implemented with a predefined set of variables that are known to be important for the agent, and the agent is hardcoded to use those variables for learning the task (or a set of task parameters). In this work we try to recover the causal structure of a demonstrated task in order to find which variables cause other variables to change, and which variables are independent of the others. We used a realistic simulator to record demonstration data for a simple pick-and-place task, and recovered several candidate causal models from the data with Tetrad, a computer program that searches for causal and statistical models. Our findings show that the recovered causal structure makes it possible to identify variables that are irrelevant to the demonstrated task.
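The abstract does not spell out the search procedure, but the core operation behind constraint-based causal discovery tools such as Tetrad is a conditional independence test over the recorded variables. Below is a minimal, self-contained Python sketch of that idea, assuming hypothetical pick-and-place variables (gripper, obj_pos, success, temp) and a simple partial-correlation test; it illustrates how an irrelevant variable could be flagged, and is not the authors' actual Tetrad workflow.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 1000

    # Hypothetical pick-and-place log (names are illustrative, not the paper's):
    gripper = rng.normal(size=n)                        # end-effector command
    obj_pos = 0.9 * gripper + 0.2 * rng.normal(size=n)  # moved by the gripper
    success = 0.8 * obj_pos + 0.2 * rng.normal(size=n)  # depends on the object only
    temp    = rng.normal(size=n)                        # independent nuisance variable

    def indep(x, y, z=None, alpha=0.01):
        """Approximate (partial) correlation test: True if x and y look independent [given z]."""
        if z is not None:
            zc = np.column_stack([np.ones(n), z])
            x = x - zc @ np.linalg.lstsq(zc, x, rcond=None)[0]
            y = y - zc @ np.linalg.lstsq(zc, y, rcond=None)[0]
        _, p = stats.pearsonr(x, y)
        return p > alpha

    print(indep(gripper, success))           # expect False: dependent via obj_pos
    print(indep(gripper, success, obj_pos))  # expect True: chain gripper -> obj_pos -> success
    print(indep(temp, success))              # expect True: temp is irrelevant to the task

Constraint-based searches such as PC (one of the algorithms Tetrad implements) automate this kind of test over all variable pairs and conditioning sets, removing graph edges wherever independence holds; variables left unconnected to the task variables are candidates for removal.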


Using Human Demonstrations to Improve Reinforcement Learning

AAAI Conferences

This work introduces Human-Agent Transfer (HAT), an algorithm that combines transfer learning, learning from demonstration, and reinforcement learning to achieve rapid learning and high performance in complex domains. Using experiments in a simulated robot soccer domain, we show that human demonstrations, transferred into a baseline policy for an agent and then refined using reinforcement learning, significantly improve both learning time and policy performance. Our evaluation compares three algorithmic approaches to incorporating demonstration rule summaries into transfer learning, and studies the impact of demonstration quality and quantity. Our results show that all three transfer methods lead to statistically significant improvements in performance over learning without demonstration.
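The abstract compares three ways of incorporating demonstration rule summaries but does not define them here. As one plausible illustration, the sketch below seeds a tabular Q-learner with a value bonus on whichever action a demonstration-derived rule recommends, then refines the policy with ordinary Q-learning. The grid world, the demo_rule function, and all constants are illustrative assumptions, not the paper's simulated robot soccer setup.

    import random
    from collections import defaultdict

    ACTIONS = ["left", "right", "up", "down"]

    def demo_rule(state):
        """Hypothetical rule summary distilled from demonstrations: head for the
        goal corner. Stands in for a rule set learned from human demonstration."""
        x, y = state
        return "right" if x < 4 else "up"

    BONUS, ALPHA, GAMMA, EPS = 1.0, 0.1, 0.95, 0.1
    Q = defaultdict(float)

    def q_init(state):
        # Value-bonus transfer: demonstrated actions start ahead of the others.
        for a in ACTIONS:
            if (state, a) not in Q:
                Q[(state, a)] = BONUS if a == demo_rule(state) else 0.0

    def step(state, action):
        """Toy 5x5 grid: reward 10 at (4, 4), -0.1 per move."""
        x, y = state
        dx, dy = {"left": (-1, 0), "right": (1, 0),
                  "up": (0, 1), "down": (0, -1)}[action]
        nxt = (min(max(x + dx, 0), 4), min(max(y + dy, 0), 4))
        return nxt, (10.0 if nxt == (4, 4) else -0.1), nxt == (4, 4)

    for episode in range(200):
        state, done = (0, 0), False
        while not done:
            q_init(state)
            action = (random.choice(ACTIONS) if random.random() < EPS
                      else max(ACTIONS, key=lambda a: Q[(state, a)]))
            nxt, reward, done = step(state, action)
            q_init(nxt)
            target = reward + (0.0 if done else
                               GAMMA * max(Q[(nxt, a)] for a in ACTIONS))
            Q[(state, action)] += ALPHA * (target - Q[(state, action)])
            state = nxt

Because the bonus only initializes Q-values, reinforcement learning can still override a poor demonstration, which is one reason this style of transfer can degrade gracefully as demonstration quality drops.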