Task-Motion Planning for Navigation in Belief Space

arXiv.org Artificial Intelligence

Antony Thomas, Fulvio Mastrogiovanni, and Marco Baglietto

We present an integrated Task-Motion Planning (TMP) framework for navigation in large-scale environments. Autonomous robots operating in complex real-world scenarios require planning in the discrete (task) space and the continuous (motion) space. In knowledge-intensive domains, on the one hand, a robot has to reason at the highest level, for example about the regions to navigate to; on the other hand, the feasibility of the respective navigation tasks has to be checked at the execution level. This presents a need for motion-planning-aware task planners. We discuss a probabilistically complete approach that leverages this task-motion interaction for navigating in indoor domains, returning a plan that is optimal at the task level. Furthermore, our framework is intended for motion planning under motion and sensing uncertainty, which is formally known as belief space planning. The underlying methodology is validated in a simulated office environment in Gazebo. In addition, we discuss the limitations and provide suggestions for improvements and future work.

1 Introduction

Autonomous robots operating in complex real-world scenarios require different levels of planning to execute their tasks. High-level (task) planning helps break down a given set of tasks into a sequence of sub-tasks. Actual execution of each of these sub-tasks requires low-level control actions to generate appropriate robot motions. In fact, the dependency between logical and geometrical aspects is pervasive in both task planning and execution.
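The belief-space feasibility check mentioned in the abstract can be illustrated with a minimal sketch: a Gaussian belief over the robot position is rolled forward along a candidate navigation action under motion noise, shrunk whenever a beacon is in sensing range, and the action is accepted only if the final uncertainty stays below a chosen threshold. The linear-Gaussian model, the beacon range, and the max_trace threshold are illustrative assumptions, not the paper's formulation.

import numpy as np

def check_navigation_action(mean, cov, waypoints, Q, R, beacons, max_trace=0.5):
    """Propagate a Gaussian belief N(mean, cov) along waypoints and report
    whether the final position uncertainty stays below max_trace."""
    for w in waypoints:
        mean = mean + (w - mean)              # predict: nominal motion toward the waypoint
        cov = cov + Q                         # motion noise inflates uncertainty
        for b in beacons:                     # update with any beacon within range
            if np.linalg.norm(mean - b) < 2.0:
                S = cov + R                   # innovation covariance (H = I)
                K = cov @ np.linalg.inv(S)    # Kalman gain
                cov = (np.eye(2) - K) @ cov   # measurement shrinks uncertainty
    return np.trace(cov) < max_trace, mean, cov

# Example: a two-waypoint corridor with one beacon halfway along it.
ok, mean, cov = check_navigation_action(
    mean=np.zeros(2), cov=0.01 * np.eye(2),
    waypoints=[np.array([1.0, 0.0]), np.array([2.0, 0.0])],
    Q=0.02 * np.eye(2), R=0.01 * np.eye(2),
    beacons=[np.array([1.0, 0.5])])
print(ok, np.trace(cov))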


Towards Multi-Robot Task-Motion Planning for Navigation in Belief Space

arXiv.org Artificial Intelligence

Autonomous robots operating in large knowledge-intensive domains require planning in the discrete (task) space and the continuous (motion) space. In knowledge-intensive domains, on the one hand, robots have to reason at the highest level, for example about the regions to navigate to or the objects to be picked up and their properties; on the other hand, the feasibility of the respective navigation tasks has to be checked at the controller execution level. Moreover, employing multiple robots offers enhanced performance capabilities over a single robot performing the same task. To this end, we present an integrated multi-robot task-motion planning framework for navigation in knowledge-intensive domains. In particular, we consider a distributed multi-robot setting incorporating mutual observations between the robots. The framework is intended for motion planning under motion and sensing uncertainty, which is formally known as belief space planning. The underlying methodology and its limitations are discussed, providing suggestions for improvements and future work. We validate key aspects of our approach in simulation.
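One way to picture the role of mutual observations is that a relative robot-to-robot measurement lets both robots tighten their beliefs. The sketch below fuses one such measurement with a plain Kalman update on the stacked joint state, assuming identity Jacobians and no prior cross-correlation; it illustrates the idea only and is not the distributed estimator used in the paper.

import numpy as np

def fuse_mutual_observation(mean_i, cov_i, mean_j, cov_j, z_rel, R):
    """Robot i measures the relative position of robot j: z_rel = x_j - x_i + noise."""
    mean = np.concatenate([mean_i, mean_j])            # joint mean [x_i, x_j]
    cov = np.block([[cov_i, np.zeros((2, 2))],
                    [np.zeros((2, 2)), cov_j]])        # assume no cross-correlation
    H = np.hstack([-np.eye(2), np.eye(2)])             # measurement Jacobian for x_j - x_i
    S = H @ cov @ H.T + R                              # innovation covariance
    K = cov @ H.T @ np.linalg.inv(S)                   # Kalman gain
    mean = mean + K @ (z_rel - H @ mean)               # correct both robots at once
    cov = (np.eye(4) - K @ H) @ cov
    return (mean[:2], cov[:2, :2]), (mean[2:], cov[2:, 2:])

# Example: two robots one metre apart, fusing a noisy relative measurement.
bel_i, bel_j = fuse_mutual_observation(
    np.array([0.0, 0.0]), 0.2 * np.eye(2),
    np.array([1.0, 0.0]), 0.3 * np.eye(2),
    z_rel=np.array([1.05, -0.02]), R=0.01 * np.eye(2))
print(np.trace(bel_i[1]), np.trace(bel_j[1]))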


MPTP: Motion-Planning-aware Task Planning for Navigation in Belief Space

arXiv.org Artificial Intelligence

We present an integrated Task-Motion Planning (TMP) framework for navigation in large-scale environments. Of late, TMP for manipulation has attracted significant interest, resulting in a proliferation of different approaches. In contrast, TMP for navigation has received considerably less attention. Autonomous robots operating in real-world complex scenarios require planning in the discrete (task) space and the continuous (motion) space. In knowledge-intensive domains, on the one hand, a robot has to reason at the highest level, for example about the objects to procure and the regions to navigate to in order to acquire them; on the other hand, the feasibility of the respective navigation tasks has to be checked at the execution level. This presents a need for motion-planning-aware task planners. In this paper, we discuss a probabilistically complete approach that leverages this task-motion interaction for navigating in large knowledge-intensive domains, returning a plan that is optimal at the task level. The framework is intended for motion planning under motion and sensing uncertainty, which is formally known as belief space planning. The underlying methodology is validated in simulation, in an office environment, and its scalability is tested in the larger Willow Garage world. A comparison with the work closest to our approach is also provided. We further demonstrate the adaptability of our approach by considering a building floor navigation domain. Finally, we discuss the limitations of our approach and put forward suggestions for improvements and future work.
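The task-motion interaction described above can be summarised as a feedback loop: the task planner proposes a task-level optimal plan, each symbolic action is checked by the motion planner (for instance, with a belief-space feasibility test), and infeasible actions or refined costs are fed back before replanning. The loop below is only a schematic sketch; task_plan, motion_feasible, and motion_cost are hypothetical placeholders, not the MPTP interfaces.

def plan_with_motion_feedback(init, goal, task_plan, motion_feasible, motion_cost,
                              max_iters=100):
    """Interleave task planning with motion-level feasibility checking."""
    banned, costs = set(), {}
    for _ in range(max_iters):
        plan = task_plan(init, goal, banned, costs)    # optimal at the task level
        if plan is None:
            return None                                # no task-level plan exists
        for action in plan:
            if not motion_feasible(action):            # e.g. a belief-space check
                banned.add(action)                     # prune the action and replan
                break
            costs[action] = motion_cost(action)        # refine task-level costs
        else:
            return plan                                # every action is feasible
    return None

A concrete instantiation could plug a PDDL-style planner into task_plan and a belief-space uncertainty check into motion_feasible.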


A Task-Motion Planning Framework Using Iteratively Deepened AND/OR Graph Networks

arXiv.org Artificial Intelligence

We present an approach for Task-Motion Planning (TMP) using Iteratively Deepened AND/OR Graph Networks (TMP-IDAN), which uses a novel AND/OR graph network-based abstraction for compactly representing the task-level states and actions. While retrieving a target object from clutter, the number of object re-arrangements required to grasp the target is not known ahead of time. To address this challenge, in contrast to traditional AND/OR graph-based planners, we grow the AND/OR graph online until the target grasp is feasible, thereby obtaining a network of AND/OR graphs. The AND/OR graph network allows faster computations than traditional task planners. We validate our approach and evaluate its capabilities using a Baxter robot and a state-of-the-art robotics simulator in several challenging, non-trivial cluttered table-top scenarios. The experiments show that our approach readily scales to an increasing number of objects and different degrees of clutter.
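The online growth of the AND/OR graph network can be pictured as appending one sub-graph per obstructing object until the target grasp becomes feasible. The sketch below captures only this control flow; grasp_feasible and next_obstruction are hypothetical callbacks, and the dictionaries stand in for the much richer task-level abstraction of TMP-IDAN.

def grow_andor_network(target, grasp_feasible, next_obstruction, max_objects=20):
    """Append sub-graphs for clearing obstructions until the target is graspable."""
    network = []                                   # the growing network of AND/OR graphs
    while not grasp_feasible(target):
        if len(network) >= max_objects:
            return None                            # clutter too deep: give up
        obstacle = next_obstruction(target)        # object currently blocking the grasp
        network.append({"task": "relocate",        # one more sub-graph for this obstacle
                        "object": obstacle})
    network.append({"task": "grasp", "object": target})
    return network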


Task Allocation for Multi-Robot Task and Motion Planning: a case for Object Picking in Cluttered Workspaces

arXiv.org Artificial Intelligence

We present an AND/OR graph-based, integrated multi-robot task and motion planning approach which (i) performs task allocation, coordinating the activity of a given number of robots, and (ii) is capable of handling tasks which involve an a priori unknown number of object re-arrangements, such as those involved in retrieving objects from cluttered workspaces. Such situations may arise, for example, in search and rescue scenarios, while locating or picking an object of interest buried in clutter. The corresponding problem falls under the category of planning in clutter. One of the challenges of planning in clutter is that the number of object re-arrangements required to pick the target object is, in general, not known beforehand. Moreover, such tasks can be decomposed in a variety of ways, since different re-arrangements of the cluttering objects can expose the target object. In our approach, task allocation and decomposition are achieved by maximizing a combined utility function. The allocated tasks are performed by an integrated task and motion planner, which is robust to the requirement of an unknown number of re-arrangement tasks. We demonstrate our results with experiments in simulation on two Franka Emika manipulators.
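The combined-utility idea can be illustrated with a deliberately small sketch: every assignment of re-arrangement tasks to robots is scored by summing per-robot utilities, and the highest-scoring assignment is kept. The brute-force search and the additive utility are simplifying assumptions for illustration, not the paper's formulation, which couples allocation with the AND/OR graph decomposition.

from itertools import product

def allocate_tasks(robots, tasks, utility):
    """Return the task-to-robot assignment maximizing the summed utility."""
    best_assign, best_value = None, float("-inf")
    for assign in product(robots, repeat=len(tasks)):       # one robot per task
        value = sum(utility(r, t) for r, t in zip(assign, tasks))
        if value > best_value:
            best_assign, best_value = dict(zip(tasks, assign)), value
    return best_assign, best_value

# Example with two manipulators and made-up execution-time costs.
costs = {("franka_1", "move_obj_A"): 4.0, ("franka_2", "move_obj_A"): 5.5,
         ("franka_1", "move_obj_B"): 6.0, ("franka_2", "move_obj_B"): 3.0,
         ("franka_1", "grasp_target"): 5.0, ("franka_2", "grasp_target"): 7.0}
print(allocate_tasks(["franka_1", "franka_2"],
                     ["move_obj_A", "move_obj_B", "grasp_target"],
                     lambda r, t: -costs[(r, t)]))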