
Collaborating Authors: Muise, Christian


PRP Rebooted: Advancing the State of the Art in FOND Planning

arXiv.org Artificial Intelligence

Fully Observable Non-Deterministic (FOND) planning is a variant of classical symbolic planning in which actions are nondeterministic, with an action's outcome known only upon execution. It is a popular planning paradigm with applications ranging from robot planning to dialogue-agent design and reactive synthesis. Over the last 20 years, a number of approaches to FOND planning have emerged. In this work, we establish a new state of the art, following in the footsteps of some of the most powerful FOND planners to date. Our planner, PR2, decisively outperforms the four leading FOND planners, at times by a large margin, in 17 of 18 domains that represent a comprehensive benchmark suite. Ablation studies demonstrate the impact of various techniques we introduce, with the largest improvement coming from our novel FOND-aware heuristic.
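
To make the FOND setting concrete, here is a minimal Python sketch of a toy problem and a naive strong-cyclic fixpoint in the style of Cimatti et al.; the domain, names, and algorithm are our own illustration, not PR2 or its FOND-aware heuristic. It shows the object a FOND planner synthesizes: a policy from states to actions rather than a sequence.

# Toy FOND problem: a robot at positions 0..3 wants position 3. "move" is
# nondeterministic (advance, or slip back to 0), so no sequential plan works;
# the solution is a policy. All names here are illustrative.
STATES = range(4)
GOAL = 3
ACTIONS = {
    "move": lambda s: {min(s + 1, GOAL), 0},  # two possible outcomes
    "wait": lambda s: {s},
}

def strong_cyclic_policy():
    # candidate (state, action) pairs, pruned to a fixpoint
    sa = {(s, a) for s in STATES if s != GOAL for a in ACTIONS}
    while True:
        solvable = {GOAL} | {s for s, _ in sa}
        # drop pairs with an outcome that escapes the solvable region
        kept = {(s, a) for s, a in sa if ACTIONS[a](s) <= solvable}
        # drop states from which the goal is unreachable via kept pairs
        reach = {GOAL}
        while True:
            grow = {s for s, a in kept if ACTIONS[a](s) & reach} - reach
            if not grow:
                break
            reach |= grow
        kept = {(s, a) for s, a in kept if s in reach}
        if kept == sa:
            break
        sa = kept
    if not sa:
        return None
    # keep only actions that can strictly reduce distance-to-goal
    dist = {s: (0 if s == GOAL else float("inf")) for s in STATES}
    changed = True
    while changed:
        changed = False
        for s, a in sa:
            d = 1 + min(dist[t] for t in ACTIONS[a](s))
            if d < dist[s]:
                dist[s], changed = d, True
    return {s: a for s, a in sa
            if 1 + min(dist[t] for t in ACTIONS[a](s)) == dist[s]}

print(strong_cyclic_policy())  # {0: 'move', 1: 'move', 2: 'move'}

Note that no strong (acyclic) plan exists here, since "move" can always slip back to 0; the strong-cyclic policy succeeds under the usual fairness assumption.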


Automated Planning Techniques for Elementary Proofs in Abstract Algebra

arXiv.org Artificial Intelligence

This paper explores the application of automated planning to automated theorem proving, which is a branch of automated reasoning concerned with the development of algorithms and computer programs to construct mathematical proofs. In particular, we investigate the use of planning to construct elementary proofs in abstract algebra, which provides a rigorous and axiomatic framework for studying algebraic structures such as groups, rings, fields, and modules. We implement basic implications, equalities, and rules in both deterministic and non-deterministic domains to model commutative rings and deduce elementary results about them. The success of this initial implementation suggests that the well-established techniques seen in automated planning are applicable to the relatively newer field of automated theorem proving. Likewise, automated theorem proving provides a new, challenging domain for automated planning.
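
As a concrete illustration of proof construction as planning (our own toy encoding, not the paper's domains): terms act as states, axiom applications act as actions, and a proof is a plan found by breadth-first search.

from collections import deque

# terms: variables are strings, applications are ('+', t1, t2)
def rewrites(term):
    """Yield (rule_name, new_term) for one rewrite step anywhere in `term`."""
    if isinstance(term, tuple):
        op, a, b = term
        if b == '0':
            yield ('additive-identity', a)       # x + 0 = x
        yield ('commutativity', (op, b, a))      # x + y = y + x
        for name, a2 in rewrites(a):             # rewrite inside subterms
            yield (name, (op, a2, b))
        for name, b2 in rewrites(b):
            yield (name, (op, a, b2))

def prove(start, goal):
    """Breadth-first 'planning' for a shortest rewrite proof."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        term, proof = queue.popleft()
        if term == goal:
            return proof
        for rule, nxt in rewrites(term):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, proof + [(rule, nxt)]))

# prove  a + (b + 0) = b + a
for step in prove(('+', 'a', ('+', 'b', '0')), ('+', 'b', 'a')):
    print(step)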


Egocentric Planning for Scalable Embodied Task Achievement

arXiv.org Artificial Intelligence

Embodied agents face significant challenges when tasked with performing actions in diverse environments, particularly in generalizing across object types and executing suitable actions to accomplish tasks. Furthermore, agents should exhibit robustness, minimizing the execution of illegal actions. In this work, we present Egocentric Planning, an innovative approach that combines symbolic planning and Object-oriented POMDPs to solve tasks in complex environments, harnessing existing models for visual perception and natural language processing. We evaluated our approach in ALFRED, a simulated environment designed for domestic tasks, and demonstrated its high scalability, achieving an impressive 36.07% unseen success rate on the ALFRED benchmark and winning the ALFRED challenge at the CVPR Embodied AI Workshop. Our method requires reliable perception and the specification or learning of a symbolic description of the preconditions and effects of the agent's actions, as well as of which object types reveal information about others. It naturally scales to new tasks beyond ALFRED, as long as they can be solved using the available skills. This work offers a solid baseline for studying end-to-end and hybrid methods that aim to generalize to new tasks, including recent approaches that rely on LLMs but often struggle to scale to long sequences of actions or to produce robust plans for novel tasks.
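
The loop the abstract describes, plan under current beliefs, act, fold observations back in, and replan when the world disagrees, can be sketched as below; plan and execute are hypothetical stand-ins for a real symbolic planner and the perception stack, not the authors' implementation.

WORLD_MUG_AT = "cabinet"  # ground truth, hidden from the agent

def plan(known_mug_at):
    """Stand-in for a symbolic planner: a plan valid under current beliefs."""
    if known_mug_at is None:
        return ["look_table", "look_cabinet"]  # information-gathering plan
    return ["goto_" + known_mug_at, "grab_mug"]

def execute(action, known_mug_at):
    """Stand-in for control + perception: returns (updated belief, done)."""
    if action.startswith("look_"):
        place = action[len("look_"):]
        return (place if place == WORLD_MUG_AT else known_mug_at), False
    return known_mug_at, action == "grab_mug"

def egocentric_loop():
    belief = None
    while True:
        for action in plan(belief):
            new_belief, done = execute(action, belief)
            print(action, "-> believed mug location:", new_belief)
            if done:
                return
            if new_belief != belief:
                belief = new_belief  # new fact observed: replan
                break

egocentric_loop()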


Classical Planning in Deep Latent Space

Journal of Artificial Intelligence Research

Current domain-independent, classical planners require symbolic models of the problem domain and instance as input, resulting in a knowledge acquisition bottleneck. Meanwhile, although deep learning has achieved significant success in many fields, the knowledge is encoded in a subsymbolic representation which is incompatible with symbolic systems such as planners. We propose Latplan, an unsupervised architecture combining deep learning and classical planning. Given only an unlabeled set of image pairs showing a subset of transitions allowed in the environment (training inputs), Latplan learns a complete propositional PDDL action model of the environment. Later, when a pair of images representing the initial and the goal states (planning inputs) is given, Latplan finds a plan to the goal state in a symbolic latent space and returns a visualized plan execution. We evaluate Latplan using image-based versions of six planning domains: 8-Puzzle, 15-Puzzle, Blocksworld, Sokoban, and two variations of LightsOut.
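
To make planning in a learned latent space concrete, the toy below searches purely on the symbolic side, which is where an off-the-shelf planner would operate; the bit-vectors and actions are invented stand-ins for Latplan's learned discrete encoder and induced action model.

from collections import deque

# pretend these bits came from a trained discrete (e.g. Gumbel-Softmax) encoder
def encode(image_name):
    return {"start": (0, 0, 1), "goal": (1, 1, 0)}[image_name]

# learned propositional actions: (precondition, effect) as bit assignments
ACTIONS = [
    ({0: 0}, {0: 1}),          # if bit0 == 0, set bit0 = 1
    ({0: 1, 2: 1}, {2: 0}),    # if bit0 and bit2, clear bit2
    ({1: 0}, {1: 1}),          # if bit1 == 0, set bit1 = 1
]

def successors(state):
    for pre, eff in ACTIONS:
        if all(state[i] == v for i, v in pre.items()):
            s = list(state)
            for i, v in eff.items():
                s[i] = v
            yield tuple(s)

def latent_plan(init, goal):
    queue, seen = deque([(init, [init])]), {init}
    while queue:
        s, path = queue.popleft()
        if s == goal:
            return path              # decode() would visualize each step
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append((t, path + [t]))

print(latent_plan(encode("start"), encode("goal")))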


Efficient Multi-agent Epistemic Planning: Teaching Planners About Nested Belief

arXiv.org Artificial Intelligence

In the absence of prescribed coordination, it is often necessary for individual agents to synthesize their own plans, taking into account not only their own capabilities and beliefs about the world but also their beliefs about other agents, including what each of the agents will come to believe as the consequence of the actions of others. To illustrate, consider the scenario where Larry and Moe meet on a regular basis at the local diner to swap the latest gossip. Larry has come to know that Nancy (Larry's daughter) has just received a major promotion in her job, but unbeknownst to him, Moe has already learned this bit of information through the grapevine. Before they speak, both believe Nancy is getting a promotion, Larry believes Moe is unaware of this (and consequently wishes to share the news), and Moe assumes Larry must already be aware of the promotion but is unaware of Moe's own knowledge of the situation. Very quickly we can see how the nesting of (potentially incorrect) belief can be a complicated and interesting setting to model. In this paper, we examine the problem of synthesizing plans in such settings. In particular, given a finite set of agents, each with: (1) (possibly incomplete and incorrect) beliefs about the world and about the beliefs of other agents; and (2) differing capabilities including the ability to perform actions whose outcomes are unknown to other agents; we are interested in synthesizing a plan to achieve a goal condition. Planning is at the belief level and as such, while we consider the execution of actions that can change the state of the world (ontic actions) as well as an agent's state of knowledge or belief (epistemic or more accurately doxastic actions, including communication actions), all outcomes are with respect to belief.
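
The Larry-and-Moe scenario can be made concrete with nested belief atoms. The toy below uses our own simplified semantics, far coarser than the paper's treatment, to show the initial nested beliefs and how a tell action corrects Larry's false belief about Moe's awareness.

def B(agent, fact):
    """Nested belief atom: B('larry', B('moe', p)) reads 'Larry believes
    Moe believes p'. A hedged toy, not the paper's compilation machinery."""
    return ("B", agent, fact)

PROMO = "nancy_promoted"

state = {
    B("larry", PROMO),             # Larry believes the promotion happened
    B("moe", PROMO),               # so does Moe (via the grapevine)
    B("moe", B("larry", PROMO)),   # Moe assumes Larry already knows
    # deliberately absent: B("larry", B("moe", PROMO)), since Larry thinks
    # Moe is unaware, which is exactly why he plans to share the news
}

def announce(state, speaker, listener, fact, depth=3):
    """Speaker tells listener `fact`: both now believe it, and each believes
    the other (and themselves) believes it, up to a fixed nesting depth."""
    out, layer = set(state), {fact}
    for _ in range(depth):
        layer = {B(a, f) for a in (speaker, listener) for f in layer}
        out |= layer
    return out

after = announce(state, "larry", "moe", PROMO)
print(B("larry", B("moe", PROMO)) in after)  # True: Larry's belief updated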


Learning Neural-Symbolic Descriptive Planning Models via Cube-Space Priors: The Voyage Home (to STRIPS)

arXiv.org Artificial Intelligence

We achieved a new milestone in the difficult task of enabling agents to learn about their environment autonomously. Our neuro-symbolic architecture is trained end-to-end to produce a succinct and effective discrete state transition model from images alone. Our target representation (the Planning Domain Definition Language) is already in a form that off-the-shelf solvers can consume, and opens the door to the rich array of modern heuristic search capabilities. We demonstrate how the sophisticated innate prior we place on the learning process significantly reduces the complexity of the learned representation, and reveals a connection to the graph-theoretic notion of cube-like graphs.

For example, its search space was shown to be compatible with symbolic Goal Recognition [Amado et al., 2018]. One major drawback of the previous work was that it used a non-descriptive, black-box neural model as the successor generator. Not only is such a black-box model incompatible with existing heuristic search techniques, but also, since a neural network is able to model a very complex function, its direct translation into a compact logical formula via a rule-based transfer learning method turned out futile [Asai, 2019a]: the model complexity causes an exponentially large grounded action model that cannot be processed by modern classical planners. Thus, obtaining descriptive action models from raw observations with minimal human interference is the next key milestone for expanding the scope of applying Automated Planning to raw unstructured inputs.
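
One way to read the cube-space prior: each learned action applies the same bit changes wherever it fires, which is what lets a STRIPS-style effect be read directly off latent transitions. A toy consistency check along those lines (our reading, not the paper's training code):

def extract_effect(transitions):
    """Given (before, after) bit-vector pairs for one action, return
    {bit: value} effects consistent with all pairs, or None if the action
    is not expressible as a constant assignment (not 'cube-like')."""
    effect = {}
    n = len(transitions[0][0])
    for i in range(n):
        values = {post[i] for _, post in transitions}
        changed = any(pre[i] != post[i] for pre, post in transitions)
        if changed:
            if len(values) != 1:
                return None    # same action writes different values: reject
            effect[i] = values.pop()
    return effect

# an action that always sets bit0 and clears bit2, wherever it is applied
obs = [((0, 0, 1), (1, 0, 0)), ((0, 1, 1), (1, 1, 0))]
print(extract_effect(obs))     # {0: 1, 2: 0} -> add bit0, delete bit2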


Explainability of Intelligent Transportation Systems using Knowledge Compilation: a Traffic Light Controller Case

arXiv.org Artificial Intelligence

Automated controllers that make decisions about an environment are widespread and are often based on black-box models. We use Knowledge Compilation theory to bring explainability to a controller's decisions given the state of the system. To this end, we use simulated historical state-action data as input and build a compact, structured representation that relates states with actions. We implement this method in a traffic light control scenario where the controller selects the light cycle by observing the presence (or absence) of vehicles in different regions of the incoming roads.
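
The flavor of the approach can be sketched as follows, with invented data and our own simplification rather than the paper's compiler: from historical state-action pairs, find a minimal sufficient reason, i.e. a subset of sensor readings that already fixes the selected light cycle across every consistent historical state.

from itertools import combinations

# historical state -> chosen light cycle; a state records vehicle presence on
# the (north, south, east, west) approaches (all data here is invented)
HISTORY = {
    (1, 0, 0, 0): "NS_green", (0, 1, 0, 0): "NS_green",
    (1, 1, 0, 0): "NS_green", (1, 1, 1, 0): "NS_green",
    (0, 0, 1, 0): "EW_green", (0, 0, 0, 1): "EW_green",
    (0, 0, 1, 1): "EW_green", (0, 1, 1, 1): "EW_green",
}
VARS = ("north", "south", "east", "west")

def sufficient_reason(state):
    """Smallest set of readings that already entails the action chosen for
    `state`, judged against every consistent historical state."""
    action = HISTORY[state]
    for size in range(len(VARS) + 1):
        for idx in combinations(range(len(VARS)), size):
            if all(act == action
                   for s, act in HISTORY.items()
                   if all(s[i] == state[i] for i in idx)):
                return {VARS[i]: state[i] for i in idx}, action

reason, action = sufficient_reason((1, 1, 1, 0))
print(action, "because", reason)  # NS_green because {'north': 1}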


Planning for Goal-Oriented Dialogue Systems

arXiv.org Artificial Intelligence

Generating complex multi-turn goal-oriented dialogue agents is a difficult problem that has seen considerable focus from many leaders in the tech industry, including IBM, Google, Amazon, and Microsoft. This is in large part due to the rapidly growing market demand for dialogue agents capable of goal-oriented behaviour. Due to the business-process nature of these conversations, end-to-end machine learning systems are generally not a viable option, as the generated dialogue agents must be deployable and verifiable on behalf of the businesses authoring them. In this work, we propose a paradigm shift in the creation of goal-oriented complex dialogue systems that eliminates the need for a designer to manually specify a dialogue tree, which nearly all current systems resort to when the interaction pattern falls outside standard patterns such as slot filling. We propose a declarative representation of the dialogue agent to be processed by state-of-the-art planning technology. Our proposed approach covers all aspects of the process, from model solicitation to the execution of the generated plans/dialogue agents. Along the way, we introduce novel planning encodings for declarative dialogue synthesis, a variety of interfaces for working with the specification as a dialogue architect, and a robust executor for generalized contingent plans. We have created prototype implementations of all components, and in this paper we demonstrate the resulting system empirically.
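
The shift from a hand-authored dialogue tree to a generated contingent plan can be illustrated with a tiny executor; the plan structure and names below are our own schematic stand-in for what the planner would synthesize from the declarative specification.

# internal nodes ask a question; branches cover the possible answers ("*" is
# a catch-all); leaves execute a fulfilment action (all names hypothetical)
PLAN = {
    "ask": "Do you want to book a flight or a hotel?",
    "branches": {
        "flight": {"ask": "Which destination city?",
                   "branches": {"*": {"do": "book_flight"}}},
        "hotel": {"do": "book_hotel"},
    },
}

def run(plan, answers):
    """Walk the contingent plan against scripted user answers (a stand-in
    for a live natural-language-understanding layer)."""
    answers = iter(answers)
    while "do" not in plan:
        print("agent:", plan["ask"])
        reply = next(answers)
        print("user:", reply)
        plan = plan["branches"].get(reply, plan["branches"].get("*"))
    print("agent executes:", plan["do"])

run(PLAN, ["flight", "Toronto"])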


Planning with Explanatory Actions: A Joint Approach to Plan Explicability and Explanations in Human-Aware Planning

arXiv.org Artificial Intelligence

In this work, we formulate the model-reconciliation approach to generating explanations for planning problems as one of planning with explanatory actions. We show that these problems can be better understood within the framework of epistemic planning and that, in fact, most earlier works on explanation as model reconciliation correspond to tractable subsets of epistemic planning problems. We empirically show that our approach is computationally more efficient than existing techniques for explanation generation, and we discuss how this particular approach could be extended to capture most of the existing variants of explanation as model reconciliation. We end the paper with a discussion of how this formulation could be extended to generate novel explanatory behaviors.
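
A toy rendering of the idea (ours, not the paper's formalism): each explanatory action moves one model difference into the human's model, and we search for the fewest explanations after which the robot's plan makes sense in the human's updated model.

from itertools import combinations

ROBOT_MODEL = {"door_locked", "robot_has_key", "key_opens_door"}
HUMAN_MODEL = {"door_locked"}          # human doesn't know about the key
DIFFS = sorted(ROBOT_MODEL - HUMAN_MODEL)

def plan_valid(model):
    """The robot's plan 'unlock door; go through' makes sense to someone
    who knows the robot has a key and that the key opens the door."""
    return {"robot_has_key", "key_opens_door"} <= model

def minimal_explanation():
    # each explanatory action discloses one model difference; search for the
    # smallest set of disclosures under which the plan becomes valid
    for size in range(len(DIFFS) + 1):
        for combo in combinations(DIFFS, size):
            if plan_valid(HUMAN_MODEL | set(combo)):
                return combo

print(minimal_explanation())   # ('key_opens_door', 'robot_has_key')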