From Vague Instructions to Task Plans: A Feedback-Driven HRC Task Planning Framework based on LLMs

Afagh Mehri Shervedani, Matthew R. Walter, Milos Zefran

arXiv.org Artificial Intelligence 

Recent advances in large language models (LLMs) have demonstrated their potential as planners in human-robot collaboration (HRC) scenarios, offering a promising alternative to traditional planning methods. LLMs, which can generate structured plans by reasoning over natural language inputs, are able to generalize across diverse tasks and adapt to human instructions. This paper investigates the potential of LLMs to facilitate planning in human-robot collaborative tasks, focusing on their ability to reason from high-level, vague human inputs and to fine-tune plans based on real-time feedback. We propose a novel hybrid framework that combines LLMs with human feedback to create dynamic, context-aware task plans. Our work also shows how a single, concise prompt can serve a wide range of tasks and environments, overcoming the limitations of the long, detailed, structured prompts typically used in prior studies. By integrating user preferences into the planning loop, we ensure that the generated plans are not only effective but also aligned with human intentions.

Planning is a fundamental aspect of robotics, enabling autonomous agents to generate sequences of actions that achieve specific goals. Traditional planning methods for human-robot collaboration (HRC) and assistive robots can be broadly categorized into two main types: rule-based planners and learning-based planners. Rule-based planners rely on predefined heuristics and symbolic representations, which makes them interpretable but unable to adapt to complex or dynamic environments. In contrast, learning-based planners, particularly those utilizing deep reinforcement learning, learn to generate plans from experience in an adaptive manner.
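To make the feedback loop concrete, the following is a minimal Python sketch of how a single concise prompt, an LLM planner, and iterative human feedback might be wired together. The function names (query_llm, get_human_feedback), the prompt wording, and the stopping criterion are illustrative assumptions, not the authors' actual implementation.

```python
from typing import Callable, List

# A single, concise system prompt intended to work across tasks and
# environments (hypothetical wording, not the paper's actual prompt).
SYSTEM_PROMPT = (
    "You are a robot task planner for human-robot collaboration. "
    "Given a possibly vague instruction and a list of available actions, "
    "produce a numbered sequence of robot actions. Revise the plan when "
    "the human provides feedback."
)


def plan_with_feedback(
    instruction: str,
    available_actions: List[str],
    query_llm: Callable[[str], str],
    get_human_feedback: Callable[[str], str],
    max_rounds: int = 3,
) -> str:
    """Iteratively refine an LLM-generated plan using human feedback."""
    # Initial plan from a vague, high-level instruction.
    prompt = (
        f"{SYSTEM_PROMPT}\n"
        f"Available actions: {', '.join(available_actions)}\n"
        f"Instruction: {instruction}\n"
        "Plan:"
    )
    plan = query_llm(prompt)

    for _ in range(max_rounds):
        feedback = get_human_feedback(plan)
        if feedback.strip().lower() in {"", "ok", "accept"}:
            break  # human is satisfied with the current plan
        # Fold the feedback into the next query so the revised plan
        # stays aligned with the user's preferences.
        prompt = (
            f"{SYSTEM_PROMPT}\n"
            f"Instruction: {instruction}\n"
            f"Previous plan:\n{plan}\n"
            f"Human feedback: {feedback}\n"
            "Revised plan:"
        )
        plan = query_llm(prompt)
    return plan


if __name__ == "__main__":
    # Stand-in LLM and user so the sketch runs without external services.
    demo_llm = lambda p: "1. pick up cup\n2. hand cup to person"
    demo_user = lambda plan: "ok"
    print(plan_with_feedback(
        "help me with the dishes",
        ["pick up", "hand over", "place"],
        demo_llm, demo_user,
    ))
```

In practice, query_llm would wrap whatever chat-completion API is used, and get_human_feedback would collect spoken or typed corrections from the human partner; the loop structure is what captures the paper's idea of keeping the user's preferences in the planning cycle.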