Encoding Geometric Invariances in Higher-Order Neural Networks

Neural Information Processing Systems

C.L. Giles, Air Force Office of Scientific Research, Bolling AFB, DC 20332; R.D. Griffin, Naval Research Laboratory, Washington, DC 20375-5000; T. Maxwell, Sachs-Freeman Associates, Landover, MD 20785

We describe a method of constructing higher-order neural networks that respond invariantly under geometric transformations on the input space. By requiring each unit to satisfy a set of constraints on the interconnection weights, a particular structure is imposed on the network. A network built using such an architecture maintains its invariant performance independent of the values the weights assume, of the learning rules used, and of the form of the nonlinearities in the network. The invariance exhibited by a first-order network is usually of a trivial sort, e.g., responding only to the average input in the case of translation invariance, whereas higher-order networks can perform useful functions and still exhibit the invariance. We derive the weight constraints for translation, rotation, scale, and several combinations of these transformations, and report results of simulation studies.
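The translation case admits a compact illustration. For a second-order unit y = f(sum over i,j of w(i,j) x_i x_j), constraining the weights to depend only on the index difference (i - j) makes the response invariant under cyclic shifts of the input, regardless of the weight values or the nonlinearity f. The following is a minimal sketch, not the authors' code; the cyclic-shift setting and the NumPy formulation are assumptions for illustration:

```python
import numpy as np

def second_order_unit(x, w_diff):
    """Second-order unit whose weights depend only on the index
    difference (i - j) mod N -- the constraint that makes the
    response invariant under cyclic translations of the input."""
    n = len(x)
    s = 0.0
    for i in range(n):
        for j in range(n):
            s += w_diff[(i - j) % n] * x[i] * x[j]
    return np.tanh(s)  # any fixed nonlinearity preserves the invariance

rng = np.random.default_rng(0)
n = 8
w = rng.normal(size=n)    # arbitrary weights satisfying the constraint
x = rng.normal(size=n)
y0 = second_order_unit(x, w)
y1 = second_order_unit(np.roll(x, 3), w)  # cyclically translated input
assert np.isclose(y0, y1)                 # identical response
```

The invariance holds exactly because shifting both indices by the same amount leaves (i - j) mod N unchanged, so the double sum is merely reordered.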


Foundations and Grand Challenges of Artificial Intelligence: AAAI Presidential Address

AI Magazine

AAAI is a society devoted to supporting the progress in science, technology and applications of AI. I thought I would use this occasion to share with you some of my thoughts on the recent advances in AI, the insights and theoretical foundations that have emerged out of the past thirty years of stable, sustained, systematic explorations in our field, and the grand challenges motivating the research in our field.


Intelligent Computer-Aided Engineering

AI Magazine

The goal of intelligent computer-aided engineering (ICAE) is to construct computer programs that capture a significant fraction of an engineer's knowledge. Today, ICAE systems are a goal, not a reality. This article attempts to refine that goal and suggest how to get there. We begin by examining several scenarios of what ICAE systems could be like. Next, we describe why ICAE won't evolve directly from current applications of expert system technology to engineering problems. We then focus on qualitative physics as a critical area where progress is needed, both in terms of representations and styles of reasoning.


What AI Can Do for Battle Management: A Report of the First AAAI Workshop on AI Applications to Battle Management

AI Magazine

The following is a synopsis of the findings of the first AAAI Workshop on AI Applications to Battle Management held at the University of Washington, 16 July 1987. The workshop organizer, Pete Bonasso, sent a point paper to a number of invited presenters giving his opinion of what AI could and could not do for battle management. This paper served as a focus for the workshop presentations and discussions and was augmented by the workshop presentations; it can also serve as a roadmap of topics for future workshops. AI can provide battle management with such capabilities as sensor data fusion and adaptive simulations. Also, several key needs in battle management will be AI research topics for years to come, such as understanding free text and inferencing in real time. Finally, there are several areas -- cooperating systems and terrain reasoning, for example -- where, given some impetus, AI might be able to provide help in the near future.


Navigation and Mapping in Large Scale Space

AI Magazine

In a large-scale space, structure is at a significantly larger scale than the observations available at an instant. To learn the structure of a large-scale space from observations, the observer must build a cognitive map of the environment by integrating observations over an extended period of time, inferring spatial structure from perceptions and the effects of actions. The cognitive map representation of large-scale space must account for both mapping, or learning structure from observations, and navigation, or creating and executing a plan to travel from one place to another. Approaches to date tend to be fragile, either because they don't build maps or because they assume nonlocal observations, such as those available in preexisting maps or global coordinate systems, including active landmark beacons and geo-locating satellites. We propose that robust navigation and mapping systems for large-scale space can be developed by adhering to a natural, four-level semantic hierarchy of descriptions for representation, planning, and execution of plans in large-scale space. The four levels are sensorimotor interaction, procedural behaviors, topological mapping, and metric mapping. Effective systems represent the environment, relative to sensors, at all four levels and formulate robust system behavior by moving flexibly between representational levels at run time. We demonstrate our claims in three implemented models: Tour, the Qualnav system simulator, and the NX robot.


Sensor Fusion in Certainty Grids for Mobile Robots

AI Magazine

A numeric representation of uncertain and incomplete sensor knowledge called certainty grids was used successfully in several recent mobile robot control programs developed at the Carnegie-Mellon University Mobile Robot Laboratory (MRL). Certainty grids have proven to be a powerful and efficient unifying solution for sensor fusion, motion planning, landmark identification, and many other central problems. MRL had good early success with ad hoc formulas for updating grid cells with new information. A new Bayesian statistical foundation for the operations promises further improvement. MRL proposes to build a software framework running on processors onboard the new Uranus mobile robot that will maintain a probabilistic, geometric map of the robot's surroundings as it moves. The certainty grid representation will allow this map to be incrementally updated in a uniform way based on information coming from various sources, including sonar, stereo vision, proximity, and contact sensors. The approach can correctly model the fuzziness of each reading and, at the same time, combine multiple measurements to produce sharper map features; it can also deal correctly with uncertainties in the robot's motion. The map will be used by planning programs to choose clear paths, identify locations (by correlating maps), identify well-known and insufficiently sensed terrain, and perhaps identify objects by shape. The certainty grid representation can be extended in the time dimension and used to detect and track moving objects. Even the simplest versions of the idea allow us to fairly straightforwardly program the robot for tasks that have hitherto been out of reach. MRL looks forward to a program that can explore a region and return to its starting place, using map "snapshots" from its outbound journey to find its way back, even in the presence of disturbances of its motion and occasional changes in the terrain.
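The Bayesian updating of grid cells that the abstract describes is commonly carried out in log-odds form, where evidence from independent sensor readings simply adds. The following is a minimal sketch under that assumption; the sensor-model probabilities are illustrative, not taken from MRL's system:

```python
import numpy as np

def update_cell(log_odds, p_occ_given_reading):
    """Bayesian update of one certainty-grid cell in log-odds form:
    evidence from independent readings adds, which is what lets sonar,
    stereo, proximity, and contact data be fused in a uniform way."""
    l = np.log(p_occ_given_reading / (1.0 - p_occ_given_reading))
    return log_odds + l

def probability(log_odds):
    """Recover the occupancy probability from the log-odds value."""
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))

cell = 0.0                   # prior log-odds (p = 0.5, i.e. unknown)
for p in (0.7, 0.7, 0.6):    # three readings suggesting "occupied"
    cell = update_cell(cell, p)
print(round(probability(cell), 3))  # prints 0.891
```

Agreeing measurements sharpen the cell toward occupied, while a later "empty" reading (p below 0.5) would subtract evidence, matching the abstract's point that multiple fuzzy readings combine into sharper map features.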


Evidence Accumulation and Flow of Control in a Hierarchical Spatial Reasoning System

AI Magazine

A fundamental goal of computer vision is the development of systems capable of carrying out scene interpretation while taking into account all the available knowledge. In this article, we focus on how the interpretation task can be aided by the expected scene information (such as map knowledge), which, in most cases, would not be in registration with the perceived scene. The proposed approach is applicable to the interpretation of scenes with three-dimensional structures as long as it is possible to generate the equivalent two-dimensional orthogonal or perspective projections of the structures in the expected scene. The system is implemented as a two-panel, six-level blackboard and uses the Dempster-Shafer formalism to accomplish inexact reasoning in a hierarchical space. Inexact reasoning involves exploiting, at different levels of abstraction, any internal geometric consistencies in the data and between the data and the expected scene. As they are discovered, these consistencies are used to update the system's belief in associating a data element with a particular entity from the expected scene.
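The belief-updating step based on the Dempster-Shafer formalism can be sketched compactly. Below is a minimal implementation of Dempster's rule of combination over a small frame of discernment; the two-entity example is hypothetical and not drawn from the article's system:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination: masses are dicts mapping
    frozenset hypotheses to belief mass; mass falling on an empty
    intersection is conflict and is renormalized away."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    scale = 1.0 - conflict
    return {h: v / scale for h, v in combined.items()}

# Two pieces of evidence about whether a data element matches
# expected-scene entity A or B; {A, B} represents "don't know".
m1 = {frozenset({"A"}): 0.6, frozenset({"A", "B"}): 0.4}
m2 = {frozenset({"A"}): 0.5, frozenset({"A", "B"}): 0.5}
m = dempster_combine(m1, m2)
print(round(m[frozenset({"A"})], 2))  # prints 0.8
```

Each discovered geometric consistency contributes a mass function like m2, and repeated combination concentrates belief on the association between the data element and a particular scene entity.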


A Framework for Representing and Reasoning about Three-Dimensional Objects for Vision

AI Magazine

The capabilities for representing and reasoning about three-dimensional (3-D) objects are essential for knowledge-based, 3-D photointerpretation systems that combine domain knowledge with image processing, as demonstrated by 3-D Mosaic and ACRONYM. Three-dimensional representation of objects is necessary for many additional applications, such as robot navigation and 3-D change detection. Geometric reasoning is especially important because geometric relationships between object parts are a rich source of domain knowledge. A practical framework for geometric representation and reasoning must incorporate projections between a two-dimensional (2-D) image and a 3-D scene, shape and surface properties of objects, and geometric and topological relationships between objects. In addition, it should allow easy modification and extension of the system's domain knowledge and be flexible enough to organize its reasoning efficiently to take advantage of the currently available knowledge. We are developing such a framework -- the Frame-based Object Recognition and Modeling (3-D FORM) System. This system uses frames to represent objects such as buildings and walls, geometric features such as lines and planes, and geometric relationships such as parallel lines. Active procedures attached to the frames dynamically compute values as needed. Because the order of processing is controlled largely by the order of slot access, the system performs both top-down and bottom-up reasoning, depending on the currently available knowledge. The FORM system is being implemented with the Carnegie-Mellon University-built Framekit tool in Common Lisp (Carbonell and Joseph 1986). To date, it has been applied to two types of geometric reasoning problems: interpreting 3-D wire frame data and solving sets of geometric constraints.
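The slot-access-driven control the abstract describes can be sketched with if-needed procedures: a slot holds either a value or an attached procedure that computes and caches a value on first access, so the order of slot access drives the reasoning. This is an illustrative sketch only, not the Framekit API; the wall/normal example is hypothetical:

```python
import numpy as np

class Frame:
    """Minimal frame: slots hold values or attached ("if-needed")
    procedures that compute a value on first access and cache it."""
    def __init__(self, **slots):
        self._slots = dict(slots)

    def get(self, name):
        v = self._slots[name]
        if callable(v):                      # if-needed procedure
            v = self._slots[name] = v(self)  # compute and cache
        return v

    def set(self, name, value):
        self._slots[name] = value

# A "wall" frame whose plane normal is derived on demand from two
# edge directions -- accessing "normal" triggers the computation.
wall = Frame(
    edge1=np.array([1.0, 0.0, 0.0]),
    edge2=np.array([0.0, 1.0, 0.0]),
    normal=lambda f: np.cross(f.get("edge1"), f.get("edge2")),
)
print(wall.get("normal"))  # prints [0. 0. 1.]
```

Because a demanded slot may itself demand other slots, access order naturally interleaves top-down requests with bottom-up data already filled in, mirroring the mixed reasoning style described above.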


DARPA Santa Cruz Workshop on Planning

AI Magazine

This is a summary of the Workshop on Planning that was sponsored by the Defense Advanced Research Project Agency and held in Santa Cruz, California, on October 21-23, 1987. The purpose of this workshop was to identify and explore new directions for research in planning.