Navigation and Mapping in Large Scale Space

AI Magazine

In a large-scale space, structure is at a significantly larger scale than the observations available at an instant. To learn the structure of a large-scale space from observations, the observer must build a cognitive map of the environment by integrating observations over an extended period of time, inferring spatial structure from perceptions and the effects of actions. The cognitive map representation of large-scale space must account for mapping, or learning structure from observations, and navigation, or creating and executing a plan to travel from one place to another. Approaches to date tend to be fragile either because they don't build maps, or because they assume nonlocal observations, such as those available in preexisting maps or global coordinate systems, including active landmark beacons and geo-locating satellites. We propose that robust navigation and mapping systems for large-scale space can be developed by adhering to a natural, four-level semantic hierarchy of descriptions for representation, planning, and execution of plans in large-scale space. The four levels are sensorimotor interaction, procedural behaviors, topological mapping, and metric mapping. Effective systems represent the environment, relative to sensors, at all four levels and formulate robust system behavior by moving flexibly between representational levels at run time. We demonstrate our claims in three implemented models: Tour, the Qualnav system simulator, and the NX robot.
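To make the topological level of this hierarchy concrete, the sketch below shows one plausible way to represent places linked by travel actions, with optional metric annotations and a simple route planner. It is a minimal illustration only; the class and method names (TopologicalMap, add_path, plan_route) are hypothetical and are not the interfaces of Tour, Qualnav, or NX.

```python
# Minimal sketch of a topological map in the spirit of the hierarchy described
# above: places linked by travel actions, with optional metric annotations.
# Names are hypothetical, not the TOUR/Qualnav/NX interfaces.
from collections import deque

class TopologicalMap:
    def __init__(self):
        self.places = set()   # distinctive places
        self.paths = {}       # (place, action) -> next place
        self.metric = {}      # (place, action) -> approximate distance

    def add_path(self, origin, action, destination, distance=None):
        self.places.update([origin, destination])
        self.paths[(origin, action)] = destination
        if distance is not None:
            self.metric[(origin, action)] = distance

    def plan_route(self, start, goal):
        """Breadth-first search over the topological graph for a sequence
        of actions; metric data is only an annotation, not a requirement."""
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            place, route = frontier.popleft()
            if place == goal:
                return route
            for (p, action), nxt in self.paths.items():
                if p == place and nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, route + [action]))
        return None

# Example: two corridors meeting at a junction.
m = TopologicalMap()
m.add_path("lab", "follow-corridor-east", "junction", distance=12.0)
m.add_path("junction", "turn-left-and-follow", "lobby", distance=8.0)
print(m.plan_route("lab", "lobby"))  # ['follow-corridor-east', 'turn-left-and-follow']
```

Keeping distances as optional annotations mirrors the claim above that travel plans can be formed at the topological level alone, with metric information refining rather than enabling the plan.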


Navigating with the Tekkotsu Pilot

AAAI Conferences

Tekkotsu is a free, open-source software framework for high-level robot programming. We describe enhancements to Tekkotsu's navigation component, the Pilot, to incorporate a particle filter for localization and an RRT-based path planner for obstacle avoidance. This allows us to largely automate the robot's navigation behavior using a combination of odometry and landmark-based localization. Beginning robot programmers need only indicate a destination in Tekkotsu's world map, and the Pilot will take the robot there. The software has been tested both in simulation and on Calliope, a new educational robot developed in the Tekkotsu lab in collaboration with RoPro Design, Inc.
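As a rough illustration of the localization stage mentioned above, the sketch below runs one particle-filter cycle: propagate particles with noisy odometry, weight them against a range observation of a known landmark, and resample. The function names and the Gaussian observation model are assumptions made for illustration and are not the Pilot's actual API.

```python
# Hedged sketch of one particle-filter localization cycle: odometry-based
# prediction, landmark-range weighting, and resampling. Illustrative only.
import math, random

def motion_update(particles, d_forward, d_theta, noise=(0.02, 0.02)):
    """Propagate (x, y, theta) particles by a noisy odometry step."""
    moved = []
    for x, y, th in particles:
        f = d_forward + random.gauss(0, noise[0])
        t = th + d_theta + random.gauss(0, noise[1])
        moved.append((x + f * math.cos(t), y + f * math.sin(t), t))
    return moved

def sensor_update(particles, landmark, measured_range, sigma=0.1):
    """Weight each particle by how well it explains a range to a known landmark."""
    lx, ly = landmark
    weights = []
    for x, y, _ in particles:
        err = measured_range - math.hypot(lx - x, ly - y)
        weights.append(math.exp(-err * err / (2 * sigma * sigma)))
    total = sum(weights) or 1.0
    return [w / total for w in weights]

def resample(particles, weights):
    """Draw a new particle set in proportion to the weights."""
    return random.choices(particles, weights=weights, k=len(particles))

# One cycle with 200 particles around an initial guess at the origin.
particles = [(random.gauss(0, 0.1), random.gauss(0, 0.1), 0.0) for _ in range(200)]
particles = motion_update(particles, d_forward=0.5, d_theta=0.0)
weights = sensor_update(particles, landmark=(2.0, 0.0), measured_range=1.5)
particles = resample(particles, weights)
```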


MinDART: A Multi-Robot Search & Retrieval System

AAAI Conferences

We are interested in studying how environmental and control factors affect the performance of a homogeneous multi-robot team doing a search and retrieval task. We have constructed a group of inexpensive robots called the Minnesota Distributed Autonomous Robot Team (MinDART) which use simple sensors and actuators to complete their tasks. We have upgraded these robots with the CMUCam, an inexpensive camera system that runs a color segmentation algorithm. The camera allows the robots to localize themselves as well as visually recognize other robots. We analyze how the team's performance is affected by target distribution (uniform or clumped), size of the team, and whether search with explicit localization is more beneficial than random search.
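The sketch below illustrates the kind of color segmentation a CMUCam-style tracker performs: threshold an RGB frame against a color window and report the blob centroid and bounding box, which the robots can then use to recognize landmarks and teammates. The threshold values and the track_color helper are hypothetical, chosen only to make the idea concrete; they are not the CMUCam firmware's parameters.

```python
# Minimal color-window segmentation sketch: mask pixels inside an RGB window,
# then report the blob centroid and bounding box. Thresholds are illustrative.
import numpy as np

def track_color(image, lo=(150, 0, 0), hi=(255, 100, 100)):
    """image: H x W x 3 uint8 array; returns (centroid, bbox) or None."""
    lo, hi = np.array(lo), np.array(hi)
    mask = np.all((image >= lo) & (image <= hi), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    centroid = (xs.mean(), ys.mean())
    bbox = (xs.min(), ys.min(), xs.max(), ys.max())
    return centroid, bbox

# Example: a synthetic 60x80 frame with a reddish target patch.
frame = np.zeros((60, 80, 3), dtype=np.uint8)
frame[20:30, 40:55] = (200, 30, 30)
print(track_color(frame))
```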


Multi-Fidelity Robotic Behaviors: Acting With Variable State Information

AAAI Conferences

Our work is driven by one of the core purposes of artificial intelligence: to develop real robotic agents that achieve complex high-level goals in real-time environments. Robotic behaviors select actions as a function of the state of the robot and of the world. Designing robust and appropriate robotic behaviors is difficult because of noise, uncertainty, and the cost of acquiring the necessary state information. We addressed this challenge within the concrete domain of robotic soccer with fully autonomous legged robots provided by Sony. In this paper, we present one of the outcomes of this research: the introduction of multi-fidelity behaviors to explicitly adapt to different levels of state information accuracy. The paper motivates and introduces our general approach and then reports on our concrete work with the Sony robots. The multi-fidelity behaviors we developed allow the robots to successfully achieve their goals in a dynamic and adversarial environment. A robot acts according to a set of behaviors that aggressively balance the cost of acquiring state information with the value of that information to the robot's ability to achieve its high-level goals. The paper includes empirical experiments that support our method of balancing the cost and benefit of the incrementally-accurate state information.
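A toy sketch of the underlying cost/benefit trade-off appears below: the behavior requests a more accurate, more expensive state estimate only when the expected gain in task value outweighs the acquisition cost. The fidelity levels, costs, and value model are invented for illustration and are not the behaviors deployed on the Sony legged robots.

```python
# Toy sketch of multi-fidelity behavior selection: pick the sensing level
# whose expected task value, minus a weighted acquisition cost, is highest.
# All numbers and names here are illustrative assumptions.
FIDELITY_LEVELS = [
    # (name, acquisition cost in seconds, expected position error in meters)
    ("odometry-only",      0.0, 0.50),
    ("quick-vision-check", 0.5, 0.20),
    ("full-localization",  2.0, 0.05),
]

def expected_task_value(position_error, goal_tolerance=0.15):
    """Crude value model: full value if the expected error is within the goal
    tolerance, decaying value otherwise."""
    if position_error <= goal_tolerance:
        return 1.0
    return goal_tolerance / position_error

def choose_fidelity(time_cost_weight=0.1):
    """Pick the fidelity level maximizing value minus weighted time cost."""
    scored = [(expected_task_value(err) - time_cost_weight * cost, name)
              for name, cost, err in FIDELITY_LEVELS]
    return max(scored)[1]

print(choose_fidelity())                      # 'full-localization' when sensing time is cheap
print(choose_fidelity(time_cost_weight=2.0))  # 'odometry-only' when sensing time is precious
```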