How Neural Nets Work

Neural Information Processing Systems

Alan Lapedes and Robert Farber, Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545

Abstract: There is presently great interest in the abilities of neural networks to mimic "qualitative reasoning" by manipulating neural encodings of symbols. Less work has been performed on using neural networks to process floating-point numbers, and it is sometimes stated that neural networks are somehow inherently inaccurate and therefore best suited for "fuzzy" qualitative reasoning. Nevertheless, the potential speed of massively parallel operations makes neural net "number crunching" an interesting topic to explore. In this paper we discuss some of our work demonstrating that, for certain applications, neural networks can achieve significantly higher numerical accuracy than more conventional techniques. In particular, prediction of future values of a chaotic time series can be performed with exceptionally high accuracy. We analyze how a neural net is able to do this, and in the process show that a large class of functions from R^n to R^m may be accurately approximated by a backpropagation neural net with just two "hidden" layers. The network uses this functional approximation to perform either interpolation (signal processing applications) or extrapolation (symbol processing applications).
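To make the setup concrete, here is a minimal sketch (in Python with NumPy, not the authors' code) of the kind of experiment the abstract describes: a feedforward net with two hidden layers, trained by backpropagation, learns the next value of a chaotic series, here the logistic map x_{t+1} = 4 x_t (1 - x_t). The layer sizes, learning rate, and sigmoid nonlinearity are our illustrative choices.

    # Minimal sketch, not the authors' code: a two-hidden-layer backprop net
    # learns the next value of the logistic map x' = 4x(1 - x).
    import numpy as np

    rng = np.random.default_rng(0)

    # Generate the chaotic series.
    x = np.empty(1000)
    x[0] = 0.3
    for t in range(999):
        x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
    X, Y = x[:-1, None], x[1:, None]            # inputs and next-step targets

    # Two hidden layers of sigmoid units, linear output (sizes are illustrative).
    sizes = [1, 10, 10, 1]
    W = [rng.normal(0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
    b = [np.zeros(n) for n in sizes[1:]]
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))

    lr, n = 0.05, len(X)
    for epoch in range(2000):
        a1 = sig(X @ W[0] + b[0])               # forward pass
        a2 = sig(a1 @ W[1] + b[1])
        out = a2 @ W[2] + b[2]
        err = out - Y
        d2 = (err @ W[2].T) * a2 * (1 - a2)     # backpropagated deltas
        d1 = (d2 @ W[1].T) * a1 * (1 - a1)
        W[2] -= lr * a2.T @ err / n; b[2] -= lr * err.mean(0)
        W[1] -= lr * a1.T @ d2 / n;  b[1] -= lr * d2.mean(0)
        W[0] -= lr * X.T @ d1 / n;   b[0] -= lr * d1.mean(0)

    print("final one-step MSE:", float((err ** 2).mean()))

Iterating the trained one-step predictor on its own output corresponds to the extrapolation use; evaluating it between training points corresponds to the interpolation use.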


Invariant Object Recognition Using a Distributed Associative Memory

Neural Information Processing Systems

This paper describes an approach to 2-dimensional object recognition. Complex-log conformal mapping is combined with a distributed associative memory to create a system which recognizes objects regardless of changes in rotation or scale. Recalled information from the memorized database is used to classify an object, reconstruct the memorized version of the object, and estimate the magnitude of changes in scale or rotation. The system response is resistant to moderate amounts of noise and occlusion. Several experiments, using real gray-scale images, are presented to show the feasibility of our approach.

Introduction

The challenge of the visual recognition problem stems from the fact that the projection of an object onto an image can be confounded by several dimensions of variability, such as uncertain perspective, changing orientation and scale, sensor noise, occlusion, and nonuniform illumination.
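As a concrete illustration of the front end, here is a minimal sketch (ours, not the paper's implementation) of the complex-log conformal mapping: resampling an image onto a (log-radius, angle) grid turns rotation and scaling about the center into shifts of the resampled image, which a shift-tolerant associative memory can then absorb. The grid sizes and nearest-neighbor sampling are illustrative assumptions.

    # Minimal sketch, ours: complex-log (log-polar) resampling of an image.
    import numpy as np

    def complex_log_map(img, n_r=64, n_theta=64):
        """Resample a square image onto a (log-radius, angle) grid."""
        h, w = img.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        r_max = min(cy, cx)
        log_r = np.linspace(0.0, np.log(r_max), n_r)        # log-spaced radii
        theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
        rr = np.exp(log_r)[:, None]
        yy = np.clip((cy + rr * np.sin(theta)).round().astype(int), 0, h - 1)
        xx = np.clip((cx + rr * np.cos(theta)).round().astype(int), 0, w - 1)
        return img[yy, xx]

    # A scaled copy of the image shifts the log-polar rows; a rotated copy
    # cyclically shifts its columns.
    img = np.zeros((128, 128))
    img[40:88, 40:88] = 1.0                                 # simple test pattern
    lp = complex_log_map(img)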


Time-Sequential Self-Organization of Hierarchical Neural Networks

Neural Information Processing Systems

Ronald H. Silverman, Cornell University Medical College, New York, NY 10021; Andrew S. Noetzel, Polytechnic University, Brooklyn, NY 11201

Abstract: Self-organization of multi-layered networks can be realized by time-sequential organization of successive neural layers. Lateral inhibition operating in the surround of firing cells in each layer provides for unsupervised capture of excitation patterns presented by the previous layer. By presenting patterns of increasing complexity, in coordination with network self-organization, higher levels of the hierarchy capture concepts implicit in the pattern set.

Introduction

A fundamental difficulty in self-organization of hierarchical, multi-layered networks of simple neuron-like cells is the determination of the direction of adjustment of synaptic link weights between neural layers not directly connected to input or output patterns. Several different approaches have been used to address this problem.
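The following sketch (our idealization, not the authors' simulation) shows one such layer: lateral inhibition is reduced to winner-take-all competition, the firing cell's weights move toward the excitation pattern it captured, and layers are trained one after another, time-sequentially. The unit counts, learning rate, and weight normalization are illustrative assumptions.

    # Minimal sketch, our idealization: one competitive layer, trained
    # unsupervised, then used to drive the next layer in sequence.
    import numpy as np

    rng = np.random.default_rng(1)

    def train_layer(patterns, n_units=8, lr=0.1, epochs=20):
        w = rng.normal(0, 0.1, (n_units, patterns.shape[1]))
        w /= np.linalg.norm(w, axis=1, keepdims=True)
        for _ in range(epochs):
            for p in patterns:
                k = np.argmax(w @ p)            # lateral inhibition: one winner
                w[k] += lr * (p - w[k])         # capture the excitation pattern
                w[k] /= np.linalg.norm(w[k])
        return w

    # Time-sequential organization: train layer 1, freeze it, then train
    # layer 2 on layer 1's responses, and so on up the hierarchy.
    patterns = rng.random((100, 16))
    w1 = train_layer(patterns)
    w2 = train_layer(patterns @ w1.T)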


Encoding Geometric Invariances in Higher-Order Neural Networks

Neural Information Processing Systems

C.L. Giles, Air Force Office of Scientific Research, Bolling AFB, DC 20332; R.D. Griffin, Naval Research Laboratory, Washington, DC 20375-5000; T. Maxwell, Sachs-Freeman Associates, Landover, MD 20785

Abstract: We describe a method of constructing higher-order neural networks that respond invariantly under geometric transformations on the input space. By requiring each unit to satisfy a set of constraints on the interconnection weights, a particular structure is imposed on the network. A network built using such an architecture maintains its invariant performance independent of the values the weights assume, of the learning rules used, and of the form of the nonlinearities in the network. The invariance exhibited by a first-order network is usually of a trivial sort, e.g., responding only to the average input in the case of translation invariance, whereas higher-order networks can perform useful functions and still exhibit the invariance. We derive the weight constraints for translation, rotation, scale, and several combinations of these transformations, and report results of simulation studies.
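A minimal sketch of the constraint idea for one second-order unit follows (our example, not the paper's code): restricting the weight w(i, j) to depend only on the difference i - j forces the unit's output to be unchanged under cyclic translations of the input, regardless of the particular weight values or the nonlinearity, just as the abstract states.

    # Minimal sketch, our example: a second-order unit whose weights obey the
    # translation constraint w(i, j) = w(i - j), making y invariant to shifts.
    import numpy as np

    rng = np.random.default_rng(2)
    N = 16
    w_diff = rng.normal(size=N)                 # one weight per difference class
    i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    W = w_diff[(i - j) % N]                     # constrained weight matrix

    def second_order_unit(x):
        # y = f( sum_{i,j} w(i - j) x_i x_j ), any nonlinearity f works
        return np.tanh(x @ W @ x)

    x = rng.random(N)
    print(second_order_unit(x))                 # matches the shifted version...
    print(second_order_unit(np.roll(x, 5)))     # ...up to rounding, for any weights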


Neuromorphic Networks Based on Sparse Optical Orthogonal Codes

Neural Information Processing Systems

Synthetic neural nets [1,2] represent an active and growing research field. Fundamental issues, as well as practical implementations with electronic and optical devices, are being studied. In addition, several learning algorithms have been studied, for example stochastically adaptive systems [3] based on many-body physics optimization concepts [4,5]. Signal processing in the optical domain has also been an active field of research. A wide variety of nonlinear all-optical devices are being studied, directed towards applications both in optical computing and in optical switching.


Reflexive Associative Memories

Neural Information Processing Systems

Hendricus G. Loos, Laguna Research Laboratory, Fallbrook, CA 92028-9765

Abstract: In the synchronous discrete model, the average memory capacity of bidirectional associative memories (BAMs) is compared with that of Hopfield memories, by means of a calculation of the percentage of good recall for 100 random BAMs of dimension 64x64, for different numbers of stored vectors. The memory capacity is found to be much smaller than the Kosko upper bound, which is the lesser of the two dimensions of the BAM. On average, a 64x64 BAM has about 68% of the capacity of the corresponding Hopfield memory with the same number of neurons. The memory capacity limitations are due to spurious stable states, which arise in BAMs in much the same way as in Hopfield memories. Occurrence of spurious stable states can be avoided by replacing the thresholding in the back layer of the BAM by another nonlinear process, here called "Dominant Label Selection" (DLS).
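The following sketch (ours, not the paper's program) reproduces the shape of the experiment: bipolar pairs are stored in the Kosko outer-product matrix, recall iterates synchronous thresholded passes through W and its transpose, and the fraction of perfectly recalled pairs estimates capacity. The number of stored pairs and the tie-breaking convention are illustrative choices.

    # Minimal sketch, ours: synchronous recall in a 64x64 BAM with Kosko
    # outer-product storage, counting perfectly recalled pairs.
    import numpy as np

    rng = np.random.default_rng(3)
    n_pairs = 8                                 # illustrative; vary to probe capacity
    A = rng.choice([-1, 1], (n_pairs, 64))
    B = rng.choice([-1, 1], (n_pairs, 64))
    W = A.T @ B                                 # storage matrix, shape 64x64
    sgn = lambda z: np.where(z >= 0, 1, -1)     # threshold (ties broken to +1)

    def recall(a, steps=20):
        b = sgn(a @ W)
        for _ in range(steps):                  # iterate to a stable pair
            a = sgn(W @ b)
            b = sgn(a @ W)
        return b

    good = sum(np.array_equal(recall(a), b) for a, b in zip(A, B))
    print(f"{good}/{n_pairs} pairs recalled perfectly")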



MURPHY: A Robot that Learns by Doing

Neural Information Processing Systems

Current Focus of Learning Research

Most connectionist learning algorithms may be grouped into three general categories, commonly referred to as supervised, unsupervised, and reinforcement learning. Supervised learning requires the explicit participation of an intelligent teacher, usually to provide the learning system with task-relevant input-output pairs (for two recent examples, see [1,2]). Unsupervised learning, exemplified by "clustering" algorithms, is generally concerned with detecting structure in a stream of input patterns [3,4,5,6,7]. In its final state, an unsupervised learning system will typically represent the discovered structure as a set of categories representing regions of the input space, or, more generally, as a mapping from the input space into a space of lower dimension that is somehow better suited to the task at hand. In reinforcement learning, a "critic" rewards or penalizes the learning system until the system ultimately produces the correct output in response to a given input pattern [8]. It has seemed an inevitable tradeoff that systems needing to rapidly learn specific, behaviorally useful input-output mappings must necessarily do so under the auspices of an intelligent teacher with a ready supply of task-relevant training examples. This state of affairs has seemed somewhat paradoxical, since the processes of perceptual and cognitive development in human infants, for example, do not depend on the moment-by-moment intervention of a teacher of any sort.

Learning by Doing

The current work has been focused on a fourth type of learning algorithm, i.e., learning-by-doing, an approach that has been very little studied from either a connectionist perspective
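As a concrete contrast between two of these categories, here is a minimal sketch (our illustration, unrelated to MURPHY itself) of the teaching signals for a single linear unit: the supervised teacher supplies the target output itself, while the reinforcement critic returns only a scalar judgment of an attempted output.

    # Minimal sketch, our illustration: teaching signals for one linear unit.
    import numpy as np

    rng = np.random.default_rng(4)
    x, target = rng.random(4), 1.0

    # Supervised (delta rule): the teacher hands over the correct output.
    w_sup = np.zeros(4)
    for _ in range(50):
        w_sup += 0.1 * (target - w_sup @ x) * x

    # Reinforcement: the critic only scores an attempt; learn by perturbation.
    w_rl = np.zeros(4)
    for _ in range(500):
        y_base = w_rl @ x
        y_try = y_base + rng.normal(0, 0.1)     # exploratory attempt
        improved = abs(target - y_try) < abs(target - y_base)
        w_rl += 0.05 * (1 if improved else -1) * (y_try - y_base) * x

    print(w_sup @ x, w_rl @ x)                  # both approach the target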


Using Neural Networks to Improve Cochlear Implant Speech Perception

Neural Information Processing Systems

An increasing number of profoundly deaf patients suffering from sensorineural deafness are using cochlear implants as prostheses. After the implant, sound can be detected through electrical stimulation of the remaining peripheral auditory nervous system. Although great progress has been achieved in this area, no useful speech recognition has been attained with either single- or multiple-channel cochlear implants. Coding evidence suggests that, for any implant to couple effectively with the natural speech perception system, it must simulate the temporal dispersion and other phenomena found in the natural receptors, which are currently not implemented in any cochlear implants. To this end, we present here a computational model using artificial neural networks (ANNs) to incorporate these natural phenomena into the artificial cochlea.
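To make the temporal-dispersion point concrete, here is a minimal sketch (our illustration; the paper's actual ANN model is not reproduced here) in which each electrode channel's envelope is smeared by a decaying kernel, approximating the spread that natural receptors impose on a stimulus. The exponential kernel shape and time constant are assumptions.

    # Minimal sketch, our illustration (not the paper's model): temporal
    # dispersion as a decaying convolution kernel applied per channel.
    import numpy as np

    def disperse(channel, tau=5.0, length=32):
        """Smear one channel's envelope with an exponential spread kernel."""
        kernel = np.exp(-np.arange(length) / tau)
        kernel /= kernel.sum()
        return np.convolve(channel, kernel)[: len(channel)]

    pulse = np.zeros(100)
    pulse[10] = 1.0                             # sharp stimulus on one electrode
    print(disperse(pulse)[10:16])               # temporally dispersed response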