AAAI Conferences

For tasks that need to be accomplished in unconstrained environments, as in the case of Urban Search and Rescue (USAR), human-robot collaboration is considered an indispensable component. Collaboration is based on accurate models of robot and human perception consistent with one another, so that the exchange of information critical to accomplishing a task is performed efficiently and in a simplified fashion to minimize interaction overhead. In this paper, we highlight the features of a human-robot team, i.e., how robot perception may be combined with human perception in a task-driven manner for USAR. We elaborate on the design of the components of a mixed-initiative system wherein a task assigned to the robot is planned and executed jointly with the human operator as a result of their interaction. We solidify our description by demonstrating the application of mixed-initiative planning in a number of examples related to the morphological adaptation of the rescue robot.


Cooperative, Dynamics-based, and Abstraction-Guided Multi-robot Motion Planning

Journal of Artificial Intelligence Research

This paper presents an effective, cooperative, and probabilistically-complete multi-robot motion planner that enables each robot to move to a desired location while avoiding collisions with obstacles and other robots. The approach takes into account not only the geometric constraints arising from collision avoidance, but also the differential constraints imposed by the motion dynamics of each robot. This makes it possible to generate collision-free and dynamically-feasible trajectories that can be executed in the physical world. The salient aspect of the approach is the coupling of sampling-based motion planning to handle the complexity arising from the obstacles and robot dynamics with multi-agent search to find solutions over a suitable discrete abstraction. The discrete abstraction is obtained by constructing roadmaps to solve a relaxed problem that accounts for the obstacles but not the dynamics. Sampling-based motion planning expands a motion tree in the composite state space of all the robots by adding collision-free and dynamically-feasible trajectories as branches. Efficiency is obtained by using multi-agent search to find non-conflicting routes over the discrete abstraction, which serve as heuristics to guide the motion-tree expansion. When little or no progress is made, the routes are penalized and the multi-agent search is invoked again to find alternative routes. This synergistic coupling makes it possible to effectively plan collision-free and dynamically-feasible motions that enable each robot to reach its goal. Experiments using vehicle models with nonlinear dynamics operating in complex environments, where cooperation among robots is required, show significant speedups over related work.
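The interplay the abstract describes — discrete search proposes a route, continuous expansion tries to follow it, and stalled routes are penalized so the search proposes alternatives — can be sketched in Python. This is a toy single-robot analogue, not the paper's actual planner: the graph, the penalty weight, and the `expand` callback (standing in for sampling-based motion-tree expansion) are all invented for illustration.

```python
import heapq
from collections import defaultdict

def shortest_route(graph, start, goal, penalty):
    # Dijkstra over the discrete abstraction; previously penalized
    # edges cost more, steering the search toward alternatives.
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w + penalty[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if goal not in dist:
        return None
    route, n = [goal], goal
    while n != start:
        n = prev[n]
        route.append(n)
    return route[::-1]

def guided_plan(graph, start, goal, expand, max_iters=10):
    # Alternate (1) discrete search for a route and (2) continuous
    # expansion along it; penalize the route when expansion stalls.
    penalty = defaultdict(float)
    for _ in range(max_iters):
        route = shortest_route(graph, start, goal, penalty)
        if route is None:
            return None
        trajectory = expand(route)   # sampling-based expansion (stubbed)
        if trajectory is not None:
            return trajectory
        for u, v in zip(route, route[1:]):   # little progress: penalize
            penalty[(u, v)] += 10.0
    return None
```

If the expansion keeps failing along the shortest route (say, the dynamics make a tight corridor infeasible), the penalties eventually make a longer but expandable route the cheapest, mirroring the replanning loop in the abstract.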


Custom Processor Speeds Up Robot Motion Planning by Factor of 1,000

IEEE Spectrum Robotics

If you've ever seen a live robot manipulation demo, you've almost certainly noticed that the robot probably spends a lot of time looking like it's not doing anything. It's tempting to say that the robot is "thinking" when this happens, and that might even be mostly correct: odds are that you're watching some poor motion-planning algorithm try to figure out how to get the robot's arm and gripper to do what it's supposed to do without running into anything. This motion planning process is both one of the most important skills a robot can have (since it's necessary for robots to "do stuff"), and also one of the most time- and processor-intensive. At the RSS 2016 conference this week, researchers from the Duke Robotics group at Duke University in Durham, N.C., are presenting a paper about "Robot Motion Planning on a Chip," in which they describe how they can speed up motion planning by three orders of magnitude while using 20 times less power. How? Rather than using general-purpose CPUs and GPUs, they instead developed a custom processor that can run collision checking across an entire 3D grid all at once.
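The core idea — precompute which grid cells each candidate motion sweeps through, then check every motion against the occupied cells — can be illustrated with a rough software analogue. This sketch iterates sequentially where the custom processor checks all motions in parallel, and the edge names, cell coordinates, and `sweep_cells` callback are made up for the example.

```python
def build_swept_sets(edges, sweep_cells):
    # Offline step: for each roadmap edge (candidate motion), record
    # the set of 3D grid cells the robot's body sweeps through.
    # sweep_cells(edge) -> iterable of (x, y, z) cells.
    return {edge: frozenset(sweep_cells(edge)) for edge in edges}

def valid_edges(swept_sets, occupied):
    # Online step: an edge survives iff its swept cells miss every
    # occupied cell. The chip evaluates all edges simultaneously;
    # in software we simply loop.
    occupied = frozenset(occupied)
    return {e for e, cells in swept_sets.items() if not (cells & occupied)}
```

Because the swept sets are fixed for a given robot and roadmap, only the cheap intersection step depends on the live obstacle data, which is what makes a hardwired, massively parallel implementation attractive.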


Task-assisted Motion Planning in Partially Observable Domains

arXiv.org Artificial Intelligence

Antony Thomas, Sunny Amatya, Fulvio Mastrogiovanni, and Marco Baglietto

Abstract -- We present an integrated Task-Motion Planning framework for robot navigation in belief space. Autonomous robots operating in complex real-world scenarios require planning in the discrete (task) space and the continuous (motion) space. To this end, we propose a framework for integrating belief space reasoning within a hybrid task planner. The expressive power of PDDL combined with heuristic-driven semantic attachments computes the propagated and posterior belief estimates while planning. The underlying methodology for the development of the combined hybrid planner is discussed, along with suggestions for improvements and future work.

INTRODUCTION

Autonomous robots operating in complex real-world scenarios require different levels of planning to execute their tasks. High-level (task) planning helps break down a given set of tasks into a sequence of sub-tasks; the actual execution of each of these sub-tasks requires low-level control actions to generate appropriate robot motions. In fact, the dependency between logical and geometrical aspects is pervasive in both task planning and execution. Hence, planning should be performed in the task-motion, or discrete-continuous, space. In recent years, combining high-level task planning with low-level motion planning has been a subject of great interest in the Robotics and Artificial Intelligence (AI) community.
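The "propagated and posterior belief estimates" the abstract refers to are the two halves of a Bayes filter: propagate the belief through the action model, then condition on an observation. A minimal discrete sketch in Python, with state names, transition probabilities, and observation likelihoods invented for the example (the paper evaluates such updates inside PDDL semantic attachments, not via this code):

```python
def predict(belief, transition):
    # Propagated belief after an action: b'(s') = sum_s T(s' | s) b(s).
    # belief: {state: prob}; transition: {state: {next_state: prob}}.
    new = {s: 0.0 for s in belief}
    for s, p in belief.items():
        for s2, t in transition.get(s, {}).items():
            new[s2] += t * p
    return new

def update(belief, likelihood):
    # Posterior belief after an observation z: b(s) ∝ L(z | s) b(s),
    # renormalized so the probabilities sum to one.
    post = {s: likelihood[s] * p for s, p in belief.items()}
    norm = sum(post.values())
    return {s: p / norm for s, p in post.items()}
```

A task planner reasoning in belief space chains these two steps for each candidate action-observation pair, so the heuristic can score plans by how concentrated the resulting posterior is.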