
Collaborating Authors

 Palacios, Hector


Improving Generalization in Task-oriented Dialogues with Workflows and Action Plans

arXiv.org Artificial Intelligence

Task-oriented dialogue is difficult in part because it involves understanding user intent, collecting information from the user, executing API calls, and generating helpful and fluent responses. However, for complex tasks one must also correctly do all of these things over multiple steps, and in a specific order. While large pre-trained language models can be fine-tuned end-to-end to create multi-step task-oriented dialogue agents that generate fluent text, our experiments confirm that this approach alone cannot reliably perform new multi-step tasks that are unseen during training. To address these limitations, we augment the dialogue contexts given to text2text transformers with known valid workflow names and action plans. Action plans consist of sequences of actions required to accomplish a task, and are encoded as simple sequences of keywords (e.g., verify-identity, pull-up-account, reset-password, etc.). We perform extensive experiments on the Action-Based Conversations Dataset (ABCD) with T5-small, base and large models, and show that such models: a) are able to more readily generalize to unseen workflows by following the provided plan, and b) are able to generalize to executing unseen actions if they are provided in the plan. In contrast, models are unable to fully accomplish new multi-step tasks when they are not provided action plan information, even when given new valid workflow names.
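A minimal sketch of the context-augmentation idea described in the abstract: the dialogue history is prefixed with the workflow name and the action plan rendered as plain keywords, and the result is fed to a T5 model. The input template, the build_input helper, and the example action names are illustrative assumptions, not the paper's exact format or released code.

```python
# Illustrative sketch (assumed template, not the paper's exact encoding):
# prefix a T5 input with the workflow name and a keyword action plan.
from transformers import T5Tokenizer, T5ForConditionalGeneration

def build_input(workflow, plan, dialogue_history):
    # Plans are simple keyword sequences, e.g. "verify-identity pull-up-account ...".
    plan_str = " ".join(plan)
    history = " ".join(dialogue_history)
    return f"workflow: {workflow} plan: {plan_str} context: {history}"

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

source = build_input(
    workflow="recover-password",                       # hypothetical workflow name
    plan=["verify-identity", "pull-up-account", "reset-password"],
    dialogue_history=["user: I can't log into my account."],
)
ids = tokenizer(source, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```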


Egocentric Planning for Scalable Embodied Task Achievement

arXiv.org Artificial Intelligence

Embodied agents face significant challenges when tasked with performing actions in diverse environments, particularly in generalizing across object types and executing suitable actions to accomplish tasks. Furthermore, agents should exhibit robustness, minimizing the execution of illegal actions. In this work, we present Egocentric Planning, an innovative approach that combines symbolic planning and Object-oriented POMDPs to solve tasks in complex environments, harnessing existing models for visual perception and natural language processing. We evaluated our approach in ALFRED, a simulated environment designed for domestic tasks, and demonstrated its high scalability, achieving an impressive 36.07% unseen success rate in the ALFRED benchmark and winning the ALFRED challenge at the CVPR Embodied AI workshop. Our method requires reliable perception and the specification or learning of a symbolic description of the preconditions and effects of the agent's actions, as well as of which object types reveal information about others. It is capable of naturally scaling to solve new tasks beyond ALFRED, as long as they can be solved using the available skills. This work offers a solid baseline for studying end-to-end and hybrid methods that aim to generalize to new tasks, including recent approaches that rely on LLMs but often struggle to scale to long sequences of actions or to produce robust plans for novel tasks.
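A hedged sketch of a plan-act-observe loop in the spirit of egocentric planning: plan symbolically from the agent's current knowledge, execute one action, fold new observations back into the symbolic state, and replan whenever knowledge grows or execution fails. The planner, perception, and execution interfaces here are placeholders, not the paper's API.

```python
# Sketch of an egocentric plan-act-observe-replan loop (interfaces assumed):
# plan_fn calls an off-the-shelf classical planner on the current facts,
# execute_fn attempts an action in the environment, perceive_fn returns
# the symbolic facts currently observable.
def egocentric_loop(initial_facts, goal, plan_fn, perceive_fn, execute_fn, max_steps=200):
    facts = set(initial_facts)          # symbolic facts known so far
    plan = plan_fn(facts, goal)
    for _ in range(max_steps):
        if goal <= facts:               # goal given as a set of required facts
            return True
        if not plan:                    # dead end under current knowledge
            return False
        action = plan.pop(0)
        ok = execute_fn(action)         # may fail if preconditions don't hold
        new_facts = perceive_fn() - facts
        facts |= new_facts
        if not ok or new_facts:         # failure or new information: replan
            plan = plan_fn(facts, goal)
    return False
```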


Scaling up ML-based Black-box Planning with Partial STRIPS Models

arXiv.org Artificial Intelligence

A popular approach for sequential decision-making is to perform simulator-based search guided by Machine Learning (ML) methods like policy learning. On the other hand, model-relaxation heuristics can guide the search effectively if a full declarative model is available. In this work, we consider how a practitioner can improve ML-based black-box planning in settings where a complete symbolic model is not available. We show that specifying an incomplete STRIPS model that describes only part of the problem enables the use of relaxation heuristics. Our findings on several planning domains suggest that this is an effective way to improve ML-based black-box planning beyond collecting more data or tuning ML architectures.
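To make the idea concrete, here is an illustrative sketch (not the paper's code) of a delete-relaxation heuristic computed from a partial STRIPS model: actions declare preconditions and add effects only for the modeled fragment of the problem, and the resulting estimate can rank states reached through the black-box simulator. The Action class and example facts are assumptions for illustration.

```python
# Additive delete-relaxation heuristic over a *partial* STRIPS model.
# Only the modeled facts and actions are used; unmodeled dynamics stay
# inside the black-box simulator.
from dataclasses import dataclass

@dataclass
class Action:
    pre: frozenset   # modeled preconditions (STRIPS facts)
    add: frozenset   # modeled add effects

def h_add(facts, goal, actions):
    """Estimate cost-to-goal by relaxed reachability over modeled facts."""
    cost = {f: 0 for f in facts}
    changed = True
    while changed:
        changed = False
        for a in actions:
            if all(p in cost for p in a.pre):
                c = 1 + sum(cost[p] for p in a.pre)
                for f in a.add:
                    if cost.get(f, float("inf")) > c:
                        cost[f] = c
                        changed = True
    return sum(cost.get(g, float("inf")) for g in goal)

# Example: project the simulator state onto the modeled facts, then score it.
acts = [Action(frozenset({"at-a"}), frozenset({"at-b"})),
        Action(frozenset({"at-b"}), frozenset({"at-goal"}))]
print(h_add({"at-a"}, {"at-goal"}, acts))   # -> 2
```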


Compiling Uncertainty Away in Conformant Planning Problems with Bounded Width

arXiv.org Artificial Intelligence

Conformant planning is the problem of finding a sequence of actions for achieving a goal in the presence of uncertainty in the initial state or action effects. The problem has been approached as a path-finding problem in belief space, where good belief representations and heuristics are critical for scaling up. In this work, a different formulation is introduced for conformant problems with deterministic actions, where they are automatically converted into classical ones and solved by an off-the-shelf classical planner. The translation maps literals L and sets of assumptions t about the initial situation into new literals KL/t that represent that L must be true if t is initially true. We lay out a general translation scheme that is sound and establish the conditions under which the translation is also complete. We show that the complexity of the complete translation is exponential in a parameter of the problem called the conformant width, which is bounded for most benchmarks. The planner based on this translation exhibits good performance in comparison with existing planners, and is the basis for T0, the best performing planner in the Conformant Track of the 2006 International Planning Competition.
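A toy illustration of the KL/t naming scheme described above (this is a sketch of the idea, not the T0 implementation): each literal L is paired with a tag t, a set of assumptions about the initial state, to form a classical literal read as "L holds if t held initially". The example tags and literals below are assumptions chosen for illustration.

```python
# Toy rendering of the K-translation literals KL/t: "L is true if the
# assumption set t was true initially". Tags and literals are illustrative.
from itertools import product

def k_literal(lit, tag):
    tag_str = ",".join(sorted(tag)) if tag else "true"
    return f"K{lit}/{tag_str}"

# Example: uncertainty about which of two rooms the agent starts in.
tags = [frozenset({"in-room1"}), frozenset({"in-room2"})]
for lit, tag in product(["holding-key"], tags):
    print(k_literal(lit, tag))
# -> Kholding-key/in-room1
#    Kholding-key/in-room2

# A goal literal L compiles to requiring KL/t for every tag t covering the
# possible initial states, so a classical plan must achieve L under each
# assumption:
def goal_achieved(lit, tags, state):
    return all(k_literal(lit, t) in state for t in tags)
```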



Reports of the AAAI 2012 Conference Workshops

AI Magazine

The AAAI-12 Workshop program was held Sunday and Monday, July 22–23, 2012 at the Sheraton Centre Toronto Hotel in Toronto, Ontario, Canada. The AAAI-12 workshop program included 9 workshops covering a wide range of topics in artificial intelligence. The titles of the workshops were Activity Context Representation: Techniques and Languages, AI for Data Center Management and Cloud Computing, Cognitive Robotics, Grounding Language for Physical Systems, Human Computation, Intelligent Techniques for Web Personalization and Recommendation, Multiagent Pathfinding, Neural-Symbolic Learning and Reasoning, Problem Solving Using Classical Planners, Semantic Cities. This article presents short summaries of those events.


Automatic Derivation of Finite-State Machines for Behavior Control

AAAI Conferences

Finite-state controllers represent an effective action selection mechanism widely used in domains such as video games and mobile robotics. In contrast to the policies obtained from MDPs and POMDPs, finite-state controllers have two advantages: they are often extremely compact, and they are general, applying to many problems and not just one. A limitation of finite-state controllers, on the other hand, is that they are written by hand. In this paper, we address this limitation, presenting a method for deriving controllers automatically from models. The models represent a class of contingent problems where actions are deterministic and some fluents are observable. The problem of deriving a controller is converted into a conformant problem that is solved using classical planners, taking advantage of a complete translation into classical planning introduced recently. The controllers derived are 'general' in the sense that they do not solve the original problem only, but many variations as well, including changes in the size of the problem or in the uncertainty of the initial situation and action effects. Several experiments illustrating the automatic derivation of controllers are presented.
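For readers unfamiliar with the representation, a minimal sketch of a finite-state controller of the kind derived here: a table mapping (controller state, observation) pairs to (action, next controller state) pairs, executed in a loop. The function names and the example transition table are illustrative, not taken from the paper.

```python
# Minimal finite-state controller executor (illustrative interfaces):
# table maps (controller_state, observation) -> (action, next_state).
def run_controller(table, observe, execute, q0="q0", max_steps=100):
    q = q0
    for _ in range(max_steps):
        obs = observe()
        if (q, obs) not in table:       # no transition defined: controller halts
            return
        action, q = table[(q, obs)]
        execute(action)

# Example: a one-state controller that moves right until it observes the
# target, then halts. It solves the task for any corridor length, which is
# the sense of 'generality' discussed in the abstract.
table = {
    ("q0", "no-target"): ("move-right", "q0"),
    # no entry for ("q0", "target-seen"): the controller stops at the goal
}
```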