desJardins, Marie


Planning with Abstract Markov Decision Processes

AAAI Conferences

Robots acting in human-scale environments must plan under uncertainty in large state–action spaces and must cope with reward functions that change as requirements and goals evolve. Planning under uncertainty in large state–action spaces requires hierarchical abstraction for efficient computation. We introduce a new hierarchical planning framework called Abstract Markov Decision Processes (AMDPs) that can plan in a fraction of the time needed for complex decision making in ordinary MDPs. AMDPs provide abstract states, actions, and transition dynamics in multiple layers above a base-level “flat” MDP. AMDPs decompose problems into a series of subtasks, each with a local reward and a local transition function used to derive that subtask's policy. The resulting hierarchical planning method is independently optimal at each level of abstraction and is recursively optimal when the local reward and transition functions are correct. We present empirical results showing significantly improved planning speed, while maintaining solution quality, in the Taxi domain and in a mobile-manipulation robotics problem. Furthermore, our approach allows specification of a decision-making model for a mobile-manipulation problem on a Turtlebot, spanning from low-level control actions operating on continuous variables all the way up through high-level object manipulation tasks.
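
As a reading aid, here is a minimal sketch of the layered structure the abstract describes, assuming a simple tabular representation: each AMDP level plans locally over its own abstract states and actions, and an abstract action is grounded by recursively planning in its subtask. All names and signatures below are illustrative assumptions, not the authors' implementation.

```python
class AMDPNode:
    def __init__(self, states, actions, transition, reward, project, subtasks=None):
        self.states = states            # abstract states at this level
        self.actions = actions          # abstract actions (subtask labels or primitives)
        self.T = transition             # T[s][a] -> {next_state: probability} (local model)
        self.R = reward                 # R(s, a) -> local reward
        self.project = project          # maps an environment state up to this level
        self.subtasks = subtasks or {}  # abstract action label -> child AMDPNode

    def plan(self, gamma=0.95, iters=100):
        """Local value iteration over this level's abstract model only."""
        V = {s: 0.0 for s in self.states}

        def q(s, a):  # one-step lookahead using the current value estimates
            return self.R(s, a) + gamma * sum(p * V[s2] for s2, p in self.T[s][a].items())

        for _ in range(iters):
            V = {s: max(q(s, a) for a in self.actions) for s in self.states}
        return {s: max(self.actions, key=lambda a: q(s, a)) for s in self.states}


def execute_step(node, env_state, step_primitive):
    """Recursively ground one abstract action down to a primitive control action."""
    policy = node.plan()                      # plan only at this level of abstraction
    action = policy[node.project(env_state)]
    if action in node.subtasks:               # abstract action: descend into its subtask
        return execute_step(node.subtasks[action], env_state, step_primitive)
    return step_primitive(env_state, action)  # primitive action: act in the world
```

Because each node plans only over its own small abstract model, the top level never enumerates the flat state space, which is the source of the speedup the abstract reports.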


Abstracting Complex Domains Using Modular Object-Oriented Markov Decision Processes

AAAI Conferences

We present an initial proposal for modular object-oriented MDPs, an extension of OO-MDPs for abstracting complex domains that are partially observable and stochastic and that have multiple goals. Modes mitigate the curse of dimensionality by restricting the attributes, objects, and actions to only the features relevant to each goal. These modes can also serve as abstracted domains for transfer to other modes or to other domains.
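
A rough sketch of how a "mode" might restrict an OO-MDP, assuming states are dictionaries mapping object names to a (class, attributes) pair; all names here are hypothetical illustrations of the idea, not the proposal's formalism.

```python
from dataclasses import dataclass

@dataclass
class Mode:
    relevant_classes: set        # object classes this goal cares about
    relevant_attributes: dict    # class name -> subset of attribute names to keep
    relevant_actions: set        # actions applicable under this mode

    def abstract_state(self, oo_state):
        """Project a full OO-MDP state onto only the features this mode needs."""
        return {
            obj: {attr: val for attr, val in attrs.items()
                  if attr in self.relevant_attributes.get(cls, set())}
            for obj, (cls, attrs) in oo_state.items()
            if cls in self.relevant_classes
        }

# Example: a "navigate" mode for a taxi-like domain ignores passenger attributes
# entirely, shrinking the state space the planner must consider for that goal.
navigate = Mode(relevant_classes={"taxi", "wall"},
                relevant_attributes={"taxi": {"x", "y"}, "wall": {"x", "y"}},
                relevant_actions={"north", "south", "east", "west"})
```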


A Summary of the Twenty-Ninth AAAI Conference on Artificial Intelligence

AI Magazine

The Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI-15) was held in January 2015 in Austin, Texas, USA. The conference program was cochaired by Sven Koenig and Blai Bonet. This report contains reflective summaries of the main conference, the robotics program, the AI and robotics workshop, the virtual agent exhibition, the What's Hot track, the competition panel, the senior member track, student and outreach activities, the student abstract and poster program, the doctoral consortium, the women's mentoring event, and the demonstrations program.


Portable Option Discovery for Automated Learning Transfer in Object-Oriented Markov Decision Processes

AAAI Conferences

We introduce a novel framework for option discovery and learning transfer in complex domains that are represented as object-oriented Markov decision processes (OO-MDPs) [Diuk et al., 2008]. Our framework, Portable Option Discovery (POD), extends existing option discovery methods, and enables transfer across related but different domains by providing an unsupervised method for finding a mapping between object-oriented domains with different state spaces. The framework also includes heuristic approaches for increasing the efficiency of the mapping process. We present the results of applying POD to Pickett and Barto's [2002] PolicyBlocks and MacGlashan's [2013] Option-Based Policy Transfer in two application domains. We show that our approach can discover options effectively, transfer options among different domains, and improve learning performance with low computational overhead.
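
To make the cross-domain mapping step concrete, here is a toy sketch under the assumption that object classes are compared by attribute-name overlap; POD's actual unsupervised mapping and its efficiency heuristics are more involved, and every function name below is our own illustration.

```python
from itertools import permutations

def attribute_overlap(src_attrs, tgt_attrs):
    """Jaccard-style compatibility score over shared attribute names."""
    src, tgt = set(src_attrs), set(tgt_attrs)
    return len(src & tgt) / len(src | tgt) if src | tgt else 0.0

def best_class_mapping(src_classes, tgt_classes):
    """Score one-to-one mappings from source to target object classes.
    Inputs: dict of class name -> list of attribute names. Assumes the target
    domain has at least as many classes as the source; returns None otherwise."""
    best, best_score = None, -1.0
    for perm in permutations(list(tgt_classes), len(src_classes)):
        mapping = dict(zip(src_classes, perm))
        score = sum(attribute_overlap(src_classes[s], tgt_classes[t])
                    for s, t in mapping.items())
        if score > best_score:
            best, best_score = mapping, score
    return best

def transfer_option(option_policy, mapping):
    """Rewrite an option's state-conditioned policy through the class mapping.
    States are assumed hashable tuples of (class, attribute-tuple) pairs."""
    return {tuple((mapping.get(cls, cls), attrs) for cls, attrs in state): action
            for state, action in option_policy.items()}
```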


ACTIVE-ating Artificial Intelligence: Integrating Active Learning in an Introductory Course

AI Magazine

This column describes my experience using a new classroom space (the ACTIVE Center), which was designed to facilitate group-based active learning and problem solving, to teach an introductory artificial intelligence course. By restructuring the course into a format that was roughly half lecture and half small-group problem solving, I was able to significantly increase student engagement, their understanding and retention of difficult concepts, and my own enjoyment in teaching the class.


Discovering Subgoals in Complex Domains

AAAI Conferences

We present ongoing research to develop novel option discovery methods for complex domains that are represented as Object-Oriented Markov Decision Processes (OO-MDPs) (Diuk, Cohen, and Littman, 2008). We describe Portable Multi-policy Option Discovery for Automated Learning (P-MODAL), an initial framework that extends Pickett and Barto’s (2002) PolicyBlocks approach to OO-MDPs. We also discuss future work that will use additional representations and techniques to handle scalability and learning challenges.
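
The PolicyBlocks approach that P-MODAL builds on merges solved source policies and keeps large regions of agreement as reusable options. A toy sketch of that intersection step follows, with hypothetical data structures (policies as state -> action dicts); it is our reading of the cited approach, not the authors' code.

```python
def policy_intersection(policies):
    """Keep the states on which all given policies choose the same action."""
    common = set(policies[0]).intersection(*map(set, policies[1:]))
    return {s: policies[0][s] for s in common
            if all(p[s] == policies[0][s] for p in policies)}

def discover_options(source_policies, min_size=5):
    """Greedy PolicyBlocks-style merging: score every pairwise intersection by
    size and keep the large ones as candidate options (partial policies)."""
    options = []
    n = len(source_policies)
    for i in range(n):
        for j in range(i + 1, n):
            block = policy_intersection([source_policies[i], source_policies[j]])
            if len(block) >= min_size:
                options.append(block)
    # Prefer larger shared blocks; they are the behavior common to more tasks.
    return sorted(options, key=len, reverse=True)
```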


Autonomous Hierarchical POMDP Planning from Low-Level Sensors

AAAI Conferences

There are currently no strong methods for planning in stochastic domains using low-level sensors that are limited and possibly inaccurate. Existing architectures have flaws that make their use in real-world environments impractical. We propose an architecture that uses POMDPs to create a hierarchical planning system. This system can develop macro-actions that expedite planning at scale, and it can learn new plans quickly and efficiently without deliberate design by the programmer.
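
A minimal sketch of what executing a macro-action in a POMDP involves: run a fixed sequence of primitive actions while tracking belief with a standard discrete Bayes filter over noisy observations. The model interfaces here are assumptions for illustration, not the proposed architecture.

```python
def belief_update(belief, action, obs, T, O):
    """Bayes filter over a discrete state set (belief enumerates all states).
    T[s][action] -> {next_state: prob}; O[s2][action] -> {observation: prob}."""
    new_belief = {}
    for s2 in belief:
        predicted = sum(belief[s] * T[s][action].get(s2, 0.0) for s in belief)
        new_belief[s2] = O[s2][action].get(obs, 0.0) * predicted
    z = sum(new_belief.values()) or 1.0     # guard against an all-zero update
    return {s: p / z for s, p in new_belief.items()}

def run_macro(belief, macro, env_step, T, O):
    """Execute a macro-action (a list of primitive actions), tracking belief.
    env_step acts in the world and returns the resulting observation."""
    for action in macro:
        obs = env_step(action)
        belief = belief_update(belief, action, obs, T, O)
    return belief
```

Planning over a handful of such macros rather than over every primitive action at every step is what makes large-scale POMDP planning tractable in this style of hierarchy.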


Representing and Reasoning With Probabilistic Knowledge: A Bayesian Approach

arXiv.org Artificial Intelligence

PAGODA (Probabilistic Autonomous Goal-Directed Agent) is a model for autonomous learning in probabilistic domains [desJardins, 1992] that incorporates innovative techniques for using the agent's existing knowledge to guide and constrain the learning process and for representing, reasoning with, and learning probabilistic knowledge. This paper describes the probabilistic representation and inference mechanism used in PAGODA. PAGODA forms theories about the effects of its actions and the world state on the environment over time. These theories are represented as conditional probability distributions. A restriction is imposed on the structure of the theories that allows the inference mechanism to find a unique predicted distribution for any action and world state description. These restricted theories are called uniquely predictive theories. The inference mechanism, Probability Combination using Independence (PCI), uses minimal independence assumptions to combine the probabilities in a theory to make probabilistic predictions.
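
As a worked illustration of independence-based probability combination in the spirit of PCI, consider the textbook rule P(e | c1, c2) ∝ P(e | c1) P(e | c2) / P(e), which holds when the conditions are independent given the outcome. PCI itself is defined over the paper's restricted uniquely predictive theories, so the sketch below is an analogy, not the paper's mechanism.

```python
def combine(prior, conditionals):
    """prior: {outcome: P(e)}; conditionals: list of {outcome: P(e | c_i)}.
    Returns the normalized combined distribution over outcomes."""
    scores = {}
    for e, p in prior.items():
        s = p
        for cond in conditionals:
            s *= cond[e] / p          # each condition contributes a likelihood ratio
        scores[e] = s
    z = sum(scores.values())
    return {e: s / z for e, s in scores.items()}

# Example: two observations each raise the probability of "rain".
prior   = {"rain": 0.3, "dry": 0.7}
cond1   = {"rain": 0.6, "dry": 0.4}   # P(e | dark clouds)
cond2   = {"rain": 0.5, "dry": 0.5}   # P(e | humid air)
print(combine(prior, [cond1, cond2]))  # -> rain ~ 0.778, dry ~ 0.222
```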