Unifying Inference-Time Planning Language Generation

Kagitha, Prabhu Prakash, Sun, Bo, Desai, Ishan, Zhu, Andrew, Huang, Cassie, Li, Manling, Li, Ziyang, Zhang, Li

arXiv.org Artificial Intelligence

A line of work in planning uses LLMs not to generate a plan directly, but to generate a formal representation in some planning language, which can be input into a symbolic solver to deterministically find a plan. While this approach improves trust and shows promising performance, dozens of recent publications have proposed scattered methods on a variety of benchmarks under different experimental settings. We attempt to unify the inference-time LLM-as-formalizer methodology for classical planning by proposing a unifying framework based on intermediate representations. We thus systematically evaluate more than a dozen pipelines that subsume most existing work, while proposing novel ones that involve syntactically similar but high-resource intermediate languages (such as a Python wrapper of PDDL). We provide recipes for planning language generation pipelines, draw a series of conclusions showing the efficacy of their various components, and provide evidence of their robustness against problem complexity.
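The intermediate-representation idea (e.g. a Python wrapper of PDDL) can be sketched as follows. The class names and the blocks-world fragment are illustrative assumptions, not the paper's actual implementation: an LLM could emit this kind of Python object instead of raw PDDL, and the wrapper then serializes it for a symbolic solver.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """A STRIPS-style action that serializes itself to PDDL syntax."""
    name: str
    parameters: list
    precondition: list
    effect: list

    def to_pddl(self) -> str:
        params = " ".join(self.parameters)
        pre = " ".join(self.precondition)
        eff = " ".join(self.effect)
        return (f"(:action {self.name}\n"
                f"  :parameters ({params})\n"
                f"  :precondition (and {pre})\n"
                f"  :effect (and {eff}))")

@dataclass
class Domain:
    """A planning domain that serializes to a PDDL domain definition."""
    name: str
    predicates: list
    actions: list = field(default_factory=list)

    def to_pddl(self) -> str:
        preds = " ".join(self.predicates)
        acts = "\n".join(a.to_pddl() for a in self.actions)
        return (f"(define (domain {self.name})\n"
                f"  (:predicates {preds})\n"
                f"{acts})")

# Blocks-world fragment expressed in the Python wrapper.
pickup = Action(
    name="pickup",
    parameters=["?b"],
    precondition=["(clear ?b)", "(ontable ?b)", "(handempty)"],
    effect=["(holding ?b)", "(not (ontable ?b))",
            "(not (clear ?b))", "(not (handempty))"],
)
domain = Domain(
    name="blocksworld",
    predicates=["(clear ?b)", "(ontable ?b)", "(handempty)", "(holding ?b)"],
    actions=[pickup],
)
print(domain.to_pddl())
```

The serialized output is ordinary PDDL text that an off-the-shelf solver can consume; generating the syntactically more familiar Python form is one way an intermediate language can sidestep an LLM's weaker command of raw PDDL.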


Argument Schemes and Dialogue for Explainable Planning

Mahesar, Quratul-ain, Parsons, Simon

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) is being increasingly deployed in practical applications. However, a major concern is whether humans will trust AI systems. In order to establish trust in AI systems, users need to understand the reasoning behind their solutions. Therefore, systems should be able to explain and justify their output. In this paper, we propose an argument scheme-based approach to providing explanations in the domain of AI planning. We present novel argument schemes to create arguments that explain a plan and its key elements, and a set of critical questions that allow interaction between the arguments and enable the user to obtain further information regarding the key elements of the plan. Furthermore, we present a novel dialogue system that uses the argument schemes and critical questions to provide interactive dialectical explanations.


Argument Schemes for Explainable Planning

Mahesar, Quratul-ain, Parsons, Simon

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) is being increasingly used to develop systems that produce intelligent solutions. However, a major concern is whether the resulting systems will be trusted by humans. In order to establish trust in AI systems, the user needs to understand the reasoning behind their solutions; therefore, the system should be able to explain and justify its output. In this paper, we use argumentation to provide explanations in the domain of AI planning. We present argument schemes to create arguments that explain a plan and its components, and a set of critical questions that allow interaction between the arguments and enable the user to obtain further information regarding the key elements of the plan. Finally, we present some properties of the plan arguments.


How to Plan When Being Deliberately Misled

Pagnucco, Maurice (The University of New South Wales) | Rajaratnam, David (The University of New South Wales) | Strass, Hannes (University of Leipzig) | Thielscher, Michael (The University of New South Wales)

AAAI Conferences

Reasoning agents are often faced with the need to robustly deal with erroneous information. When a robot given the task of returning with the red cup from the kitchen table arrives in the kitchen to find no red cup but instead notices a blue cup and a red plate on the table, what should it do? The best course of action is to attempt to salvage the situation by relying on its preferences to return with one of the objects available. We provide a solution to this problem using the Situation Calculus extended with a notion of belief. We then provide an efficient practical implementation by mapping this formalism into default rules for which we have an implemented solver.
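The red-cup example can be read operationally as preference-guided fallback: when the requested object is absent, return the most-preferred object actually observed. The following is a hypothetical sketch of that behavior only, not the paper's Situation Calculus formalism or its default-rule encoding; the function name and the preference ordering are assumptions.

```python
def choose_object(goal, observed, preferences):
    """Return the goal object if it was observed; otherwise fall back to
    the most-preferred available alternative, or None if nothing remains.

    `preferences` is ordered best-first and stands in for the agent's
    preference relation over salvage options."""
    if goal in observed:
        return goal
    for candidate in preferences:
        if candidate in observed:
            return candidate
    return None

# The robot was sent for the red cup but finds only a blue cup and a red plate.
result = choose_object(
    goal="red cup",
    observed={"blue cup", "red plate"},
    preferences=["blue cup", "red plate"],
)
print(result)  # the blue cup, under this assumed preference ordering
```

The point of the sketch is the control flow, not the data: the agent's erroneous belief (red cup on the table) is revised by observation, and preferences then select among the remaining courses of action.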


Partial-Order Planning with Concurrent Interacting Actions

Boutilier, C., Brafman, R. I.

Journal of Artificial Intelligence Research

In order to generate plans for agents with multiple actuators, agent teams, or distributed controllers, we must be able to represent and plan using concurrent actions with interacting effects. This has historically been considered a challenging task requiring a temporal planner with the ability to reason explicitly about time. We show that with simple modifications, the STRIPS action representation language can be used to represent interacting actions. Moreover, algorithms for partial-order planning require only small modifications in order to be applied in such multiagent domains. We demonstrate this fact by developing a sound and complete partial-order planner for planning with concurrent interacting actions, POMP, that extends existing partial-order planners in a straightforward way. These results open the way to the use of partial-order planners for the centralized control of cooperative multiagent systems.
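One ingredient of planning with concurrent STRIPS actions is checking that a set of actions scheduled at the same step does not interfere: no action may delete another's precondition or an effect another adds. The sketch below illustrates that consistency check only, under an assumed `(pre, add, delete)` tuple encoding; it is not the POMP planner itself.

```python
def consistent_concurrent(actions):
    """Check whether a set of STRIPS actions can execute concurrently.

    Each action is a tuple (pre, add, delete) of sets of ground facts.
    The set is consistent if no action's delete list clobbers another
    action's precondition, and no action deletes a fact another adds."""
    for i, (pre_i, add_i, del_i) in enumerate(actions):
        for j, (pre_j, add_j, del_j) in enumerate(actions):
            if i == j:
                continue
            if add_i & del_j:   # conflicting effects on the same fact
                return False
            if del_i & pre_j:   # i destroys a precondition j relies on
                return False
    return True

# Two agents lifting opposite ends of a table: independent effects, consistent.
lift_left  = ({"clear(table)"}, {"lifted(left)"},  set())
lift_right = ({"clear(table)"}, {"lifted(right)"}, set())
# Dropping the left end undoes lift_left's effect: inconsistent together.
drop_left  = (set(), set(), {"lifted(left)"})

print(consistent_concurrent([lift_left, lift_right]))  # True
print(consistent_concurrent([lift_left, drop_left]))   # False
```

A partial-order planner extended this way treats such a check as an extra flaw-resolution step when two actions are left unordered, which is the "small modification" spirit the abstract describes.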