
Robot Planning & Action


SafePicking: Learning Safe Object Extraction via Object-Level Mapping

arXiv.org Artificial Intelligence

Robots need object-level scene understanding to manipulate objects while reasoning about contact, support, and occlusion among objects. Given a pile of objects, object recognition and reconstruction can identify the boundaries of object instances, giving important cues as to how the objects form and support the pile. In this work, we present a system, SafePicking, that integrates object-level mapping and learning-based motion planning to generate a motion that safely extracts occluded target objects from a pile. Planning is performed by a deep Q-network that receives observations of predicted object poses and a depth-based heightmap and outputs a motion trajectory, trained to maximize a safety-metric reward. Our results show that fusing pose and depth observations gives the model both better performance and better robustness. We evaluate our method on YCB objects in both simulation and the real world, achieving safe object extraction from piles.
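
The paper's central design point, fusing predicted object poses with a depth heightmap as the Q-network's observation, can be pictured with a minimal PyTorch sketch. The layer sizes, input shapes, and action discretization below are assumptions made for illustration, not the authors' architecture:

```python
import torch
import torch.nn as nn

class FusedQNetwork(nn.Module):
    """Illustrative Q-network fusing predicted object poses with a heightmap.

    Layer sizes and the fusion scheme are assumptions for this sketch; they
    are not the architecture from the SafePicking paper.
    """

    def __init__(self, num_objects=8, pose_dim=7, num_actions=6):
        super().__init__()
        # CNN encoder for the depth-based heightmap (1 x 64 x 64 assumed).
        self.heightmap_encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
        )
        # MLP encoder for predicted poses (position + quaternion per object).
        self.pose_encoder = nn.Sequential(
            nn.Linear(num_objects * pose_dim, 128), nn.ReLU(),
        )
        # Q-head over a discretized set of extraction motions.
        self.q_head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, heightmap, poses):
        h = self.heightmap_encoder(heightmap)          # (B, 128)
        p = self.pose_encoder(poses.flatten(1))        # (B, 128)
        return self.q_head(torch.cat([h, p], dim=1))   # (B, num_actions)
```

Concatenating the two embeddings is the simplest fusion choice; the abstract's finding is only that having both modalities helps, not that this particular fusion is the one used.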


Salzman

AAAI Conferences

We consider the motion-planning problem of planning a collision-free path for a robot in the presence of risk zones. The robot is allowed to travel through these zones but is penalized super-linearly for the consecutive, accumulated time spent there. We suggest a natural cost function that balances path length and risk-exposure time. Specifically, we consider the discrete setting where we are given a graph, or a roadmap, and we wish to compute the minimal-cost path under this cost function. Interestingly, paths defined using our cost function do not have an optimal substructure.
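
To make the "no optimal substructure" point concrete, here is a small worked example assuming one instance of such a cost function: edge traversal time equal to edge length, and a quadratic penalty on each maximal consecutive stretch of risk-zone time. Both choices are illustrative, not the paper's exact definition:

```python
def path_cost(edges, risk_penalty=lambda t: t ** 2):
    """Cost = total length + super-linear penalty per maximal risk segment.

    `edges` is a list of (length, in_risk_zone) pairs; traversal time is taken
    equal to edge length, and t**2 is an illustrative super-linear penalty.
    """
    total, risk_run = 0.0, 0.0
    for length, in_risk in edges:
        total += length
        if in_risk:
            risk_run += length               # extend the current risk segment
        else:
            total += risk_penalty(risk_run)  # leaving the zone closes a segment
            risk_run = 0.0
    return total + risk_penalty(risk_run)    # close a segment ending at the goal

# Two ways from s to m: A cuts through the zone, B goes around it.
A = [(2.0, True)]     # cost 2 + 2**2 = 6  -> the cheaper prefix
B = [(7.0, False)]    # cost 7 + 0    = 7
cont = [(3.0, True)]  # continuation m -> g spends 3 more units in the same zone
print(path_cost(A), path_cost(B))                # 6.0 7.0
print(path_cost(A + cont), path_cost(B + cont))  # 30.0 19.0 -> B's prefix wins
```

Because the cheapest prefix to an intermediate node depends on how much risk time the suffix will add to the same segment, standard shortest-path dynamic programming does not directly apply, which is exactly the difficulty the abstract points to.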


Shivashankar

AAAI Conferences

Low-level motion planning techniques must be combined with high-level task planning formalisms in order to generate realistic plans that can be carried out by humans and robots. Previous attempts to integrate the two mostly used either Classical Planning or HTN Planning. Recently, we developed Hierarchical Goal Networks (HGNs), a new hierarchical planning formalism that combines the advantages of HTN and Classical planning while mitigating some of the disadvantages of each. In this paper, we describe our ongoing research on designing a planning formalism and algorithm that exploit the unique features of HGNs to better integrate task and motion planning. We also describe how the proposed planning framework can be instantiated to solve assembly planning problems involving human-robot teams.
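
As a rough illustration of the HGN idea (satisfy the front goal either directly with an action or by decomposing it into subgoals via a method), here is a toy sketch. The STRIPS-style state encoding and depth-bounded search are our simplifications, not the authors' HGN algorithm:

```python
from dataclasses import dataclass
from typing import FrozenSet

State = FrozenSet[str]

@dataclass
class Action:
    name: str
    pre: FrozenSet[str]
    add: FrozenSet[str]

    def applicable(self, s: State) -> bool:
        return self.pre <= s

    def apply(self, s: State) -> State:
        return s | self.add

def hgn_plan(state, goals, actions, methods, depth=30):
    """Toy HGN-style search: achieve the front goal with an action that makes
    it true, or decompose it via a method into subgoals placed before it."""
    if not goals:
        return []
    if depth == 0:
        return None
    g, rest = goals[0], goals[1:]
    if g in state:                       # front goal already holds
        return hgn_plan(state, rest, actions, methods, depth - 1)
    for a in actions:                    # primitive achievement
        if a.applicable(state) and g in a.apply(state):
            tail = hgn_plan(a.apply(state), rest, actions, methods, depth - 1)
            if tail is not None:
                return [a.name] + tail
    for goal, subgoals in methods:       # goal decomposition
        if goal == g:
            tail = hgn_plan(state, list(subgoals) + goals, actions, methods, depth - 1)
            if tail is not None:
                return tail
    return None

acts = [Action("pick", frozenset({"clear"}), frozenset({"holding"})),
        Action("place", frozenset({"holding"}), frozenset({"assembled"}))]
meths = [("assembled", ("holding",))]    # to assemble, first achieve holding
print(hgn_plan(frozenset({"clear"}), ["assembled"], acts, meths))  # ['pick', 'place']
```

The appeal for task-and-motion integration is visible even in the toy: goals, unlike HTN tasks, are declarative states that a motion planner can be queried against at any decomposition level.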


Lee

AAAI Conferences

As robot algorithms for manipulation and navigation advance and robot hardware becomes more robust and readily available, industry demands that robots perform more sophisticated tasks in our homes and factories. For many years, direct teleoperation was the most common form of robot control. However, due to the complexity of robot motion, human operators must focus most of their attention on low-level motion control, which heightens their cognitive load. In this abstract, we propose a goal-directed approach to programming robots by providing a tool to model the world and specify goal states for a given task. Operators set the initial positions of objects and their affordances, along with their goal positions, by imposing three-dimensional (3D) templates on point clouds. Robots then solve the given task using a combination of task and motion planning algorithms.
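
One way to picture the proposed interface is the data it would produce: each template fit yields an object, its affordances, and its initial and goal poses, which reduce to goal facts for a task-and-motion planner. The schema below is a hypothetical illustration of that idea, not the authors' tool:

```python
from dataclasses import dataclass
from typing import List, Tuple

Pose = Tuple[float, float, float, float, float, float]  # x, y, z, roll, pitch, yaw

@dataclass
class TemplateFit:
    """One 3D template imposed on the point cloud (illustrative schema only)."""
    object_id: str
    affordances: List[str]      # e.g. ["graspable_top", "stackable"]
    initial_pose: Pose          # where the template was fit in the scene
    goal_pose: Pose             # where the operator wants the object to end up

def to_planner_goal(fits: List[TemplateFit]) -> List[Tuple[str, Pose]]:
    """Reduce operator input to goal facts a task-and-motion planner could take."""
    return [(f.object_id, f.goal_pose) for f in fits]

scene = [TemplateFit("box_1", ["graspable_top"],
                     (0.4, 0.1, 0.02, 0, 0, 0),
                     (0.6, -0.2, 0.02, 0, 0, 1.57))]
print(to_planner_goal(scene))   # [('box_1', (0.6, -0.2, 0.02, 0, 0, 1.57))]
```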


Srivastava

AAAI Conferences

Domain models for sequential decision making (SDM) typically represent abstract versions of real-world systems. In practice, such representations are compact, easy to maintain, and afford faster solution times. Unfortunately, as we show in this paper, simple ways of abstracting solvable real-world problems may lead to models whose solutions are incorrect with respect to the real-world problem. There is some evidence that such limitations have restricted the applicability of SDM technology in the real world, as is apparent in the case of task and motion planning in robotics. We show that the situation can be ameliorated by a combination of increased expressive power (for example, allowing angelic nondeterminism in action effects) and new kinds of algorithmic approaches designed to produce correct solutions from initially incorrect or non-Markovian abstract models.
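
A toy example helps pin down why angelic nondeterminism matters. Below, an abstract place action has several concrete outcomes; an abstraction that commits to a single arbitrary outcome can be wrong about the real problem, while angelic (exists-choice) semantics asks whether some refinement achieves the goal and records that choice for refinement. This is our illustration of the concept, not the paper's formalism:

```python
def abstract_place(state, obj):
    """All concrete outcomes of the abstract action place(obj): one successor
    state per free pose (a toy model of an abstract action's effect set)."""
    return [{"at": {**state["at"], obj: p},
             "free": [q for q in state["free"] if q != p]}
            for p in state["free"]]

state = {"at": {}, "free": ["left", "right"]}
goal = lambda s: s["at"].get("cup") == "right"

# Deterministic abstraction: commit to one arbitrary outcome of the action.
naive = abstract_place(state, "cup")[0]
print(goal(naive))                                        # False: this commitment misses the goal

# Angelic semantics: the action counts as achieving the goal if SOME outcome
# does, and the planner hands the witnessing choice down to motion planning.
print(any(goal(s) for s in abstract_place(state, "cup"))) # True
```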


Waldhart

AAAI Conferences

Task planners and motion planners operate on very different representations, making them hard to link. We propose software that fills this gap in a generic way, mainly by choosing the best way to physically perform a task according to a higher-level plan while explicitly taking human comfort and preferences into account.


Learning to reason about and to act on physical cascading events

arXiv.org Artificial Intelligence

Reasoning about and interacting with dynamic environments is a fundamental problem in AI, but it becomes extremely challenging when actions can trigger cascades of cross-dependent events. We introduce a new supervised learning setup called Cascade, in which an agent is shown a video of a physically simulated dynamic scene and is asked to intervene and trigger a cascade of events such that the system reaches a "counterfactual" goal. For instance, the agent may be asked to "make the blue ball hit the red one, by pushing the green ball". The agent's intervention is drawn from a continuous space, and cascades of events make the dynamics highly non-linear. We combine semantic tree search with an event-driven forward model and devise an algorithm that learns to search in semantic trees in continuous spaces. We demonstrate that our approach learns to effectively follow instructions to intervene in previously unseen complex scenes. It can also reason about alternative outcomes when provided with an observed cascade of events.
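
In that spirit, a bare-bones version of searching over continuous interventions with a forward model might look like the sketch below. The sampling scheme, the intervention parameterization (a push angle and magnitude), and the dummy model are all our illustrative assumptions, not the paper's algorithm or API:

```python
import random

def intervention_search(forward_model, score, n_branch=4, depth=3):
    """Toy tree search over continuous interventions: sample candidate pushes,
    roll out an event-driven forward model, keep the best-scoring cascade."""
    best = (float("-inf"), None)

    def expand(prefix, d):
        nonlocal best
        if d == 0:
            return
        for _ in range(n_branch):
            push = (random.uniform(0.0, 6.283), random.uniform(0.1, 1.0))
            plan = prefix + [push]
            events = forward_model(plan)        # predicted cascade of events
            s = score(events)
            if s > best[0]:
                best = (s, plan)
            expand(plan, d - 1)

    expand([], depth)
    return best

# Dummy stand-ins: a push angled "leftward" makes the green ball hit the blue one.
dummy_model = lambda plan: ["green_hits_blue"] if plan[-1][0] < 3.14 else []
dummy_score = lambda events: 1.0 if "green_hits_blue" in events else 0.0
print(intervention_search(dummy_model, dummy_score))
```

The key property the abstract highlights is that the tree is semantic: branching happens over predicted events rather than raw simulation timesteps, which keeps the search over a continuous intervention space tractable.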


You Only Demonstrate Once: Category-Level Manipulation from Single Visual Demonstration

arXiv.org Artificial Intelligence

Promising results have been achieved recently in category-level manipulation that generalizes across object instances. Nevertheless, it often requires expensive real-world data collection and manual specification of semantic keypoints for each object category and task. Additionally, coarse keypoint predictions and ignoring intermediate action sequences hinder adoption in complex manipulation tasks beyond pick-and-place. This work proposes a novel category-level manipulation framework that leverages an object-centric, category-level representation and model-free 6-DoF motion tracking. The canonical object representation is learned solely in simulation and then used to parse a category-level task trajectory from a single demonstration video. The demonstration is reprojected to a target trajectory tailored to a novel object via the canonical representation. During execution, the manipulation horizon is decomposed into long-range, collision-free motion and last-inch manipulation. For the latter, a category-level behavior cloning (CatBC) method leverages motion tracking to perform closed-loop control. CatBC follows the target trajectory, projected from the demonstration and anchored to a dynamically selected category-level coordinate frame. The frame is selected automatically along the manipulation horizon by a local attention mechanism. This framework makes it possible to teach different manipulation strategies from a single demonstration, without complicated manual programming. Extensive experiments demonstrate its efficacy on a range of challenging industrial tasks in high-precision assembly that involve learning complex, long-horizon policies. The approach exhibits robustness against uncertainty due to dynamics, as well as generalization across object instances and scene configurations.
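
The closed-loop re-anchoring step can be illustrated with plain homogeneous transforms: at each control step, the tracked object pose is composed with the selected anchor frame, and the demonstrated waypoint (stored in that frame) is mapped back to the camera frame. The frame names and toy numbers below are our assumptions, not the paper's implementation:

```python
import numpy as np

def catbc_step(T_cam_obj, T_obj_anchor, demo_waypoint_in_anchor):
    """One closed-loop step in the spirit of category-level behavior cloning:
    re-anchor a demonstrated waypoint to the currently tracked object pose.

    T_cam_obj:    4x4 object pose in the camera frame (from 6-DoF tracking)
    T_obj_anchor: 4x4 transform from the object frame to the selected
                  category-level anchor frame
    demo_waypoint_in_anchor: 4x4 gripper pose expressed in the anchor frame
    """
    T_cam_anchor = T_cam_obj @ T_obj_anchor
    return T_cam_anchor @ demo_waypoint_in_anchor   # target gripper pose (camera frame)

# Toy usage: identity anchoring, object translated 10 cm along x.
T_cam_obj = np.eye(4); T_cam_obj[0, 3] = 0.10
target = catbc_step(T_cam_obj, np.eye(4), np.eye(4))
print(target[:3, 3])    # [0.1 0.  0. ] -- the waypoint follows the tracked object
```

Because the waypoint is expressed relative to a tracked frame rather than in world coordinates, perturbing the object mid-task automatically shifts the target trajectory with it, which is what enables closed-loop last-inch control.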


Behavior Tree-Based Asynchronous Task Planning for Multiple Mobile Robots using a Data Distribution Service

arXiv.org Artificial Intelligence

In this study, we propose a task-planning framework for multiple robots that builds on behavior trees (BTs). The BTs communicate with a data distribution service (DDS) to send and receive data. Since a standard BT, derived from one root node with a single tick, is unsuitable for multiple robots, we propose a novel type of BT action and improved nodes to control multiple robots asynchronously through a DDS. To plan tasks for the robots efficiently, a single task planning unit is implemented with the proposed task types. The task planning unit assigns tasks to each robot simultaneously through a single coalesced BT. If a robot encounters a fault while performing its assigned task, a second BT embedded in the robot is executed and the robot enters a recovery mode to overcome the fault. To support this, the BT action corresponding to a task is defined as a variable shared over the DDS, so that actions can be exchanged between the task planning unit and the robots. To show the feasibility of our framework in a real-world application, three mobile robots were experimentally coordinated by the proposed single task planning unit, via a DDS, to travel alternately to four goal positions.
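
The asynchronous-action idea can be sketched without committing to any particular DDS vendor: the node publishes its task when first ticked, then returns RUNNING on subsequent ticks until a status message arrives, so one tree can drive several robots concurrently. A plain dict stands in for the DDS topics here; everything below is an illustration of the pattern, not the paper's code:

```python
from enum import Enum

class Status(Enum):
    RUNNING, SUCCESS, FAILURE = 1, 2, 3

# A dict stands in for DDS topics; a real system would publish and subscribe
# through an actual DDS implementation instead of reading this shared state.
bus = {"cmd": {}, "status": {}}

class AsyncDDSAction:
    """Illustrative asynchronous BT action for one robot (not the paper's code)."""

    def __init__(self, robot_id, task):
        self.robot_id, self.task, self.sent = robot_id, task, False

    def tick(self):
        if not self.sent:
            bus["cmd"][self.robot_id] = self.task   # publish the task over "DDS"
            self.sent = True
            return Status.RUNNING
        done = bus["status"].get(self.robot_id)     # poll the robot's reported status
        if done is None:
            return Status.RUNNING                   # still executing: don't block the tree
        return Status.SUCCESS if done == "ok" else Status.FAILURE

# One planning-unit tick over three robots: all actions proceed concurrently.
actions = [AsyncDDSAction(i, f"goto_goal_{i}") for i in range(3)]
print([a.tick() for a in actions])      # all RUNNING; robots execute in parallel
bus["status"][1] = "ok"
print(actions[1].tick())                # Status.SUCCESS once robot 1 reports back
```

Returning RUNNING instead of blocking is what removes the single-tick bottleneck the abstract describes: the root can keep ticking all robots' actions while each robot executes on its own.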


3D Printed Research Robotics Platform Runs Remotely

#artificialintelligence

The Open Dynamic Robot Initiative is a collaboration between five robotics-oriented research groups, based in three countries, that aims to build an open-source robotics platform built around torque control. By leveraging 3D printing, a few custom PCBs, and off-the-shelf parts, it offers a low barrier to entry and a much lower cost than similar robots. The eagle-eyed will note that this is only a development platform, and all of the higher-level control is off-machine, hosted by a separate PC. What's interesting here is just how low-level the robot actually is: the motion hardware is purely a few BLDC motors driven by field-oriented control (FOC) driver units, a wireless controller, and some batteries.