This paper addresses the problem of planning a safe (i.e., collision-free) trajectory from an initial state to a goal region when the obstacle space is a priori unknown and is incrementally revealed online, e.g., through line-of-sight perception. Despite its ubiquitous nature, this formulation of motion planning has received relatively little theoretical investigation, as opposed to the setup where the environment is assumed known. A fundamental challenge is that, unlike motion planning with known obstacles, it is not even clear what constitutes an optimal policy to strive for. Our contribution is threefold. First, we present a notion of optimality for safe planning in unknown environments in the spirit of comparative (as opposed to competitive) analysis, with the goal of obtaining a benchmark that is, at least conceptually, attainable. Second, by leveraging this theoretical benchmark, we derive a pseudo-optimal class of policies that can seamlessly incorporate any amount of prior or learned information while still guaranteeing that the robot never collides. Finally, we demonstrate the practicality of our algorithmic approach in numerical experiments using a range of environment types and dynamics, including a comparison with a state-of-the-art method. A key aspect of our framework is that it automatically and implicitly weighs exploration against exploitation in a way that is optimal with respect to the information available.
The Stanford AI Lab cart is a card-table-sized mobile robot controlled remotely through a radio link, and equipped with a TV camera and transmitter. A computer has been programmed to drive the cart through cluttered indoor and outdoor spaces, gaining its knowledge about the world entirely from images broadcast by the onboard TV system. The cart deduces the three-dimensional location of objects around it, and its own motion among them, by noting their apparent relative shifts in successive images obtained from the moving TV camera. It maintains a model of the location of the ground, and registers objects it has seen as potential obstacles if they are sufficiently above the surface, but not too high. It plans a path to a user-specified destination which avoids these obstructions. This plan is changed as the moving cart perceives new obstacles on its journey. The system is moderately reliable, but very slow. The cart moves about one meter every ten to fifteen minutes, in lurches. After rolling a meter, it stops, takes some pictures, and thinks about them for a long time. Then it plans a new path, executes a little of it, and pauses again.
One of the hallmarks of human reaching behavior is the ability to think and generate plans for movements in complex environments. In this paper we model planning to reach for targets in space using a self-organized process of mental rehearsal of movements, and simulate the process using a redundant robot arm that learns to reach for targets in space while avoiding obstacles. The learning process is inspired by infant motor babbling: self-generated movement commands activate correlated visual, spatial, and motor/proprioceptive information, which is used to learn forward and inverse kinematic models while moving in obstacle-free space. To control the arm in complex environments with obstacles, the inverse model is constrained by the visually perceived locations of obstacles, yielding a purely reactive obstacle-avoidance controller, while the forward model is used to visually plan movements when reactive obstacle avoidance is insufficient. Reach planning uses the forward model to recall information and mentally rehearse reaches that escape the local minima present in the solution landscape of the purely reactive controller, thereby finding a path around obstacles.
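The motor-babbling idea above can be illustrated with a minimal sketch: issue random joint commands, record the resulting end-effector positions (standing in for visual feedback), and use the recorded pairs as lookup-based forward and inverse models. This is an illustrative toy, not the paper's architecture; the planar two-link arm, link lengths, and nearest-neighbor lookup are all assumptions made for the example.

```python
import numpy as np

# Link lengths of a hypothetical planar two-link arm (illustrative values).
L1, L2 = 1.0, 0.8

def true_forward_kinematics(q):
    """End-effector (x, y) for joint angles q = (q1, q2); stands in for
    the visual observation the system would make while babbling."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

rng = np.random.default_rng(0)

# "Motor babbling": random joint commands paired with their observed
# outcomes, forming the correlated motor/visual training data.
angles = rng.uniform(-np.pi, np.pi, size=(5000, 2))
positions = np.array([true_forward_kinematics(q) for q in angles])

def forward_model(q):
    """Learned forward model: nearest-neighbor lookup in the babbled data."""
    i = np.argmin(np.linalg.norm(angles - q, axis=1))
    return positions[i]

def inverse_model(target_xy):
    """Learned inverse model: the joint command whose recorded outcome
    was closest to the desired end-effector position."""
    i = np.argmin(np.linalg.norm(positions - target_xy, axis=1))
    return angles[i]

# The inverse model proposes a command; the forward model "mentally
# rehearses" it before execution, as in the paper's reach planning.
target = np.array([1.2, 0.6])
q = inverse_model(target)
predicted = forward_model(q)
err = np.linalg.norm(true_forward_kinematics(q) - target)
print(f"rehearsed reach error: {err:.3f}")
```

With a few thousand babbled samples the nearest recorded outcome lands close to the target, so the rehearsed reach error is small; a real system would replace the lookup tables with learned function approximators.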
Roboticists are putting a tremendous amount of time and effort into finding the right combination of sensors and algorithms that will keep their drones from smashing into things. It's a very difficult problem: with a few exceptions, you've got small platforms that move fast and don't have the payload capacity for the kind of sensors or computers that you really need for real-time avoidance of things like trees or power lines. And without obstacle avoidance, how will we ever have drones that can deliver new athletic socks to our doorstep in 30 minutes or less? At the University of Pennsylvania's GRASP Lab, where they've been working very hard at getting quadrotors to fly through windows without running into them, Yash Mulgaonkar, Luis Guerrero-Bonilla, Anurag Makineni, and Professor Vijay Kumar have come up with what seems to be a much simpler solution for navigation and obstacle avoidance with swarms of small aerial robots: give them a roll cage, and just let them run into whatever is in their way. This kind of "it'll be fine" philosophy is what you find in most small flying insects, like bees: they don't worry all that much about bumbling into stuff, or each other; they just kind of shrug it off and keep on going.
It used to be that even sophisticated mobile robots could be easily defeated by using (say) a table to block their way. The robot would sense the table, categorize it as an obstacle, try to plan a path around it, and then give up when its planner failed. This works because robots generally don't know what most objects are, how they work, or what you can do with them: objects just get turned into obstacles to be avoided, because in most cases that's the easiest and safest thing to do. You can't normally use a table across a hallway to deter a human, because humans understand that tables are physical objects that can be moved, and the human will just pull the table out of the way and keep on going. Even if the table doesn't behave exactly the way we'd expect it to (say, one of the wheels is stuck), we can adapt and figure it out.