
Collaborating Authors

 Griffin, R. D.


Encoding Geometric Invariances in Higher-Order Neural Networks

Neural Information Processing Systems

C.L. Giles, Air Force Office of Scientific Research, Bolling AFB, DC 20332; R.D. Griffin, Naval Research Laboratory, Washington, DC 20375-5000; T. Maxwell, Sachs-Freeman Associates, Landover, MD 20785

We describe a method of constructing higher-order neural networks that respond invariantly under geometric transformations on the input space. By requiring each unit to satisfy a set of constraints on the interconnection weights, a particular structure is imposed on the network. A network built using such an architecture maintains its invariant performance independent of the values the weights assume, of the learning rules used, and of the form of the nonlinearities in the network. The invariance exhibited by a first-order network is usually of a trivial sort, e.g., responding only to the average input in the case of translation invariance, whereas higher-order networks can perform useful functions and still exhibit the invariance. We derive the weight constraints for translation, rotation, scale, and several combinations of these transformations, and report results of simulation studies.

INTRODUCTION

A persistent difficulty for pattern recognition systems is the requirement that patterns or objects be recognized independent of irrelevant parameters or distortions such as orientation (position, rotation, aspect), scale or size, background or context, Doppler shift, time of occurrence, or signal duration.
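The abstract's central claim, that the invariance follows from the weight constraints alone, regardless of the weight values, the learning rule, or the nonlinearity, can be illustrated concretely for translation. Below is a minimal sketch, not the paper's code: it assumes a second-order (sigma-pi) unit on a length-N input with cyclic (wrap-around) translation, and it encodes the translation constraint as the requirement that each weight depend only on the relative separation of its two input positions. The function and variable names are hypothetical.

```python
import numpy as np

# Sketch of a second-order unit whose weights are constrained so that its
# output is invariant under cyclic translations of the input. The constraint
# w[j, k] = w_rel[(k - j) mod N] means each weight depends only on the
# relative separation of the two input positions, so shifting the input
# pattern leaves the double sum unchanged.

def second_order_unit(x, w_rel, nonlinearity=np.tanh):
    """Second-order (sigma-pi) unit with translation-constrained weights.

    x     : 1-D input pattern of length N
    w_rel : length-N vector of free weights indexed by relative
            separation (k - j) mod N; the full N x N weight matrix
            w[j, k] = w_rel[(k - j) % N] is built from it
    """
    n = len(x)
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    w = w_rel[(k - j) % n]           # constrained N x N weight matrix
    s = np.sum(w * np.outer(x, x))   # sum_{j,k} w[j,k] * x[j] * x[k]
    return nonlinearity(s)

# The invariance holds for arbitrary random weights and without any
# training, which is the property the abstract emphasizes:
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w_rel = rng.standard_normal(8)
y0 = second_order_unit(x, w_rel)
y1 = second_order_unit(np.roll(x, 3), w_rel)  # cyclically shifted input
assert np.isclose(y0, y1)
```

The check passes because substituting the shifted indices into the double sum leaves every relative separation (k - j) unchanged, so each product x[j] * x[k] is simply paired with the same weight as before the shift.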

