Robots in the workplace can perform hazardous or even 'impossible' tasks; e.g., toxic waste clean-up, desert and space exploration, and more. AI researchers are also interested in the intelligent processing involved in moving about and manipulating objects in the real world.
Topological maps represent the world as a graph of places, with the arcs of the graph representing movements between places. Brooks (Brooks 1985) argues persuasively for the use of topological maps as a means of dealing with uncertainty in mobile robot navigation. Indeed, the idea of a map that contains no metric or geometric information, but only the notions of proximity and order, is enticing because such an approach eliminates the inevitable problems of dealing with movement uncertainty in mobile robots. Movement errors do not accumulate globally in topological maps as they do in maps with a global coordinate system, since the robot only navigates locally, between places. Topological maps are also much more compact in their representation of space, in that they represent only certain places and not the entire world, in contrast to robots that use detailed a priori models of the world, such as those of (Kosaka & Kak 1992) and (Fennema & Hanson 1990). For these reasons, topological maps have become increasingly popular in mobile robotics.
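The place-graph idea above can be made concrete with a small sketch. This is an illustrative data structure only, not code from any of the cited systems; the place names and adjacencies are hypothetical.

```python
from collections import deque

# A topological map: nodes are places, arcs are traversable movements.
# No coordinates are stored, only adjacency, so movement error cannot
# accumulate in a global frame.
topo_map = {
    "lobby":   ["hallway"],
    "hallway": ["lobby", "lab", "office"],
    "lab":     ["hallway"],
    "office":  ["hallway"],
}

def route(topo_map, start, goal):
    """Breadth-first search for a sequence of places from start to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in topo_map[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable from start

print(route(topo_map, "lobby", "office"))  # ['lobby', 'hallway', 'office']
```

The robot executing such a route only ever navigates locally, from the current place to an adjacent one, which is where the robustness to movement uncertainty comes from.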
Robots performing complex tasks in rich environments need very good perception modules in order to understand their situation and choose the best action. Robot planning systems have typically assumed that perception was so good that it could refresh the entire world model whenever the planning system needed it, or whenever anything in the world changed. Unfortunately, this assumption is completely unrealistic in many real-world domains because perception is far too difficult. Robots in these domains cannot use the traditional planner paradigm, but instead need a new system design that integrates reasoning with perception. Our research is aimed at showing how a robot can reason about perception, how task knowledge can be used to select perceptual targets, and how this selection dramatically reduces the computational cost of perception.
Last summer, AAAI sponsored a mobile robot competition in conjunction with the AAAI-92 conference in San Jose, California. Ten robots from across the country competed, with CARMEL from the University of Michigan finishing first. CARMEL is a Cybermotion K2A mobile platform with a ring of 24 sonar sensors and a single black-and-white CCD camera. For computing, CARMEL has three processors: one for motor control, one for sonar ring firing, and one for executing high-level routines such as obstacle avoidance and object recognition. All computation and power are contained entirely on-board.
In this paper I will describe Polly, a low-cost vision-based robot that gives primitive tours. The system is very simple, robust, and efficient, and runs on a hardware platform that could be duplicated for less than $10K US. The system was built to explore how knowledge about the structure of the environment can be used in a principled way to simplify both visual and motor processing. I will argue that very simple and efficient visual mechanisms can often be used to solve real problems in real (unmodified) environments in a principled manner. I will give an overview of the robot, discuss the properties of its environment, show how they can be used to simplify the design of the system, and discuss what lessons can be drawn for the design of other systems.
Inspired by the success of the distributed computing community in applying logics of knowledge and time to reasoning about distributed protocols, we aim for a similarly powerful and high-level abstraction when reasoning about control problems involving uncertainty. Here we concentrate on robot motion planning, with uncertainty in both control and sensing. This problem has already been well studied within the robotics community. Our contributions include the following: • We define a new, natural problem in this domain: obtaining a sound and complete termination condition, given initial and goal locations.
This paper describes a series of experiments that were performed on the Rocky III robot. Rocky III is a small autonomous rover capable of navigating through rough outdoor terrain to a predesignated area, searching that area for soft soil, acquiring a soil sample, and depositing the sample in a container at its home base. The robot is programmed according to a reactive behavior-control paradigm using the ALFA programming language. This style of programming produces robust autonomous performance while requiring significantly less computational resources than more traditional mobile robot control systems. The code for Rocky III runs on an 8-bit processor and uses about 10 KB of memory.
To operate in the real world robots must deal with errors in control and sensing. Achieving goals despite these errors requires complex motion planning and plan monitoring. We present a reduced version of the general problem and a complete planner that solves it in polynomial time. The basic concept underlying this planner is that of a landmark. Within the field of influence of a landmark, robot control and sensing are perfect.
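The landmark concept above can be sketched as follows. This is a hypothetical illustration, not the authors' planner: each landmark is modeled as a circular field of influence, and a plan is a chain of overlapping fields from start to goal, since within each field control and sensing are treated as perfect.

```python
import math
from collections import deque

# Hypothetical landmarks: name -> (x, y, radius of field of influence).
landmarks = {
    "L1": (0.0, 0.0, 2.0),
    "L2": (3.0, 0.0, 2.0),
    "L3": (6.0, 0.0, 2.0),
}

def overlaps(a, b):
    """Two circular fields overlap when centers are within the sum of radii."""
    (x1, y1, r1), (x2, y2, r2) = a, b
    return math.hypot(x1 - x2, y1 - y2) <= r1 + r2

def chain(landmarks, start, goal):
    """BFS over the overlap graph: a path of landmarks whose fields
    pairwise overlap, so the robot is never outside all fields."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for name in landmarks:
            if name not in seen and overlaps(landmarks[path[-1]], landmarks[name]):
                seen.add(name)
                frontier.append(path + [name])
    return None

print(chain(landmarks, "L1", "L3"))  # ['L1', 'L2', 'L3']
```

Restricting plans to such chains is one way a reduced version of the general problem can admit a complete polynomial-time planner: search is over a finite graph of landmark fields rather than over the full configuration space.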
We have been investigating the problem of controlling autonomous mobile robots in real-world environments in a way which is reliable, task-directed (and taskable), and reactive to contingencies. The result of our research is a control architecture called ATLANTIS which combines a reactive control mechanism with a traditional planning system. In this paper we describe a series of experiments using the architecture to control real-world and simulated real-world robots. We demonstrate that the architecture is capable of pursuing multiple goals in real time in a noisy, partially unpredictable environment. The central result of the work is to show how a traditional symbolic planner can be smoothly integrated into an embedded system. We begin by reviewing the difficulties associated with embedding AI systems into real-world robots. Controlling autonomous mobile robots is hard for three fundamental reasons. First, the time available to decide what to do is limited. A mobile robot must operate at the pace of its environment.
This paper describes the results of using a reactive control software architecture for a mobile robot retrieval task in an outdoor environment. The software architecture draws from the ideas of universal plans and subsumption's layered control, producing reaction plans that exploit low-level competences as operators. The retrieval task requires the robot to locate and navigate to a donor agent, receive an object from the donor, and return. The implementation employs the concept of navigation templates (NaTs) to construct and update an obstacle space from which navigation plans are developed and continually revised. Selective perception is employed among an infrared beacon detector which determines the bearing to the donor, a real-time stereo vision system which obtains the range, and ultrasonic sensors which monitor for obstacles en route. The perception routines achieve a robust, controlled switching among sensor modes as defined by the reaction plan of the robot. In demonstration runs in an outdoor parking lot, the robot located the donor object while avoiding obstacles and executed the retrieval task among a variety of moving and stationary objects, including moving cars, without stopping its traversal motion. The architecture was previously reported to be effective for simple navigation and pick-and-place tasks using ultrasonics. Thus, the results reported herein indicate that the architecture will scale well to more complex tasks using a variety of sensors.
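The selective-perception idea above can be sketched minimally: the current step of the reaction plan selects which single sensor routine runs, instead of refreshing every sensor continuously. The step and sensor names below are illustrative assumptions, not identifiers from the described system.

```python
# Hypothetical mapping from reaction-plan step to active sensor mode.
SENSOR_FOR_STEP = {
    "locate_donor": "ir_beacon",      # bearing to the donor agent
    "approach":     "stereo_vision",  # range to the donor
    "return_home":  "ultrasonics",    # monitor for obstacles en route
}

def select_sensor(step):
    """The reaction plan, not the sensor suite, drives perception:
    only the sensor needed for the current step is activated."""
    return SENSOR_FOR_STEP[step]

for step in ["locate_donor", "approach", "return_home"]:
    print(step, "->", select_sensor(step))
```

The point of such controlled switching is computational: perception effort is spent only on the quantity the current plan step actually needs.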
A mechanical assembly is usually described by the geometry of its parts and the spatial relations defining their positions. This model does not directly provide the information needed to reason about assembly and disassembly motions. We propose another representation, the non-directional blocking graph, which describes the qualitative internal structure of the assembly. This representation makes explicit how the parts prevent each other from being moved in every possible direction of motion. It derives from the observation that the infinite set of motion directions can be partitioned into a finite arrangement of subsets such that over each subset the interferences among the parts remain qualitatively the same. We describe how this structure can be efficiently computed from the geometric model of the assembly. The (dis)assembly motions considered include infinitesimal and extended translations in two and three dimensions, and infinitesimal rigid motions.
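The partition of motion directions described above can be illustrated for 2D translations. In this hypothetical sketch (the parts and intervals are invented, and the partition is given rather than computed from geometry), the directions in which part A is blocked by part B form angular intervals; querying a direction yields a directed blocking graph, where an edge A → B means "moving A in this direction is prevented by B".

```python
import math

# Hypothetical blocking intervals: (A, B) -> angle intervals (radians)
# over which translating A is blocked by B.
blocked = {
    ("A", "B"): [(math.pi / 2, 3 * math.pi / 2)],  # A blocked by B for leftward directions
    ("B", "C"): [(math.pi, 2 * math.pi)],          # B blocked by C for downward directions
}

def blocking_graph(blocked, theta):
    """Directed blocking graph for translation direction theta.
    Within any one interval the graph is constant, which is the
    finite-partition observation made explicit."""
    t = theta % (2 * math.pi)
    return {(a, b)
            for (a, b), intervals in blocked.items()
            if any(lo < t < hi for lo, hi in intervals)}

print(blocking_graph(blocked, math.pi / 2))  # set(): both parts free moving straight up
print(blocking_graph(blocked, 4.0))          # both blocking relations active
```

An empty blocking graph for some direction means the assembly can begin to come apart along it, which is exactly the kind of qualitative question the non-directional blocking graph is built to answer without re-examining the geometry for every candidate motion.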