Benjamin J. Kuipers and Tad S. Levitt

AI Magazine 

In a large-scale space, structure is at a significantly larger scale than the observations available at an instant. To learn the structure of a large-scale space from observations, the observer must build a cognitive map of the environment by integrating observations over an extended period of time, inferring spatial structure from perceptions and the effects of actions. The cognitive map representation of large-scale space must account for mapping, or learning structure from observations, and navigation, or creating and executing a plan to travel from one place to another. Approaches to date tend to be fragile either because they don't build maps or because they assume nonlocal observations, such as those available in preexisting maps or global coordinate systems, including active landmark beacons such as global positioning satellites.

Large-scale space and the corresponding cognitive map representation cannot be defined independently of the sensory perceptions and motor actions used to observe and move about in the environment. For example, a workbench observed by a laser-bearing robot is not a large-scale space, but the moon is a large-scale space relative to a land-roving robot. A microchip is not large scale relative to an optical inspection system, but a grasshopper ganglion is a large-scale space when observed by an electron microscope.

Inverse trigonometric operations and scalar multiplication require ratio data, in which a numeric value is calibrated with respect to a true zero. Trigonometric operations can require only interval data on angles, where differences are well defined but absolute angles are not.
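
To make the distinction concrete, the following minimal sketch (in Python, with illustrative function names that are not from the article) contrasts an interval-scale computation on headings, where only differences matter, with a ratio-scale computation that recovers a heading from Cartesian offsets by inverse trigonometry and therefore depends on measurements from a true zero.

import math

def turn_angle(heading_a, heading_b):
    # Interval data suffice: only the difference between two compass headings
    # (in degrees) is needed, not their absolute values. Shifting both
    # headings by the same constant leaves the result unchanged.
    return (heading_b - heading_a + 180.0) % 360.0 - 180.0

def heading_from_offsets(dx, dy):
    # Ratio data are required: the inverse trigonometric operation atan2 is
    # meaningful only if dx and dy are measured from a true zero (zero
    # displacement), so that their ratio carries information.
    return math.degrees(math.atan2(dy, dx))

# Interval scale: the turn from 350 degrees to 10 degrees is a 20-degree turn,
# and the answer is unchanged if both headings are shifted by, say, 37 degrees.
assert abs(turn_angle(350.0, 10.0) - 20.0) < 1e-9
assert abs(turn_angle(350.0 + 37.0, 10.0 + 37.0) - 20.0) < 1e-9

# Ratio scale: rescaling both displacements (a change of unit) preserves the
# recovered heading, because only the ratio dy/dx enters the computation.
assert abs(heading_from_offsets(3.0, 4.0) - heading_from_offsets(6.0, 8.0)) < 1e-9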