
New Hardware for Massive Neural Networks

Neural Information Processing Systems

ABSTRACT Transient phenomena associated with forward-biased silicon p-n-n structures at 4.2K show remarkable similarities with biological neurons. The devices play a role similar to the two-terminal switching elements in Hodgkin-Huxley equivalent circuit diagrams. The devices provide simpler and more realistic neuron emulation than transistors or op-amps. They have such low power and current requirements that they could be used in massive neural networks. Some observed properties of simple circuits containing the devices include action potentials, refractory periods, threshold behavior, excitation, inhibition, summation over synaptic inputs, synaptic weights, temporal integration, memory, network connectivity modification based on experience, pacemaker activity, firing thresholds, coupling to sensors with graded signal outputs, and the dependence of firing rate on input current.
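The neuron-like behaviors listed above (threshold firing, refractory periods, and firing rate depending on input current) can be illustrated with a generic leaky integrate-and-fire model. This sketch is not a model of the p-n-n device itself; all parameter values are illustrative assumptions.

```python
def lif_spike_count(i_in, steps=1000, tau=20.0, v_th=1.0, refractory=5):
    """Leaky integrate-and-fire: count spikes for a constant input current."""
    v, spikes, wait = 0.0, 0, 0
    for _ in range(steps):
        if wait > 0:                # refractory period: ignore input
            wait -= 1
            continue
        v += (-v / tau) + i_in      # leaky integration of the input current
        if v >= v_th:               # threshold crossing -> action potential
            spikes += 1
            v = 0.0                 # reset membrane potential
            wait = refractory
    return spikes

# Firing rate grows with input current; below a critical current the
# leak dominates and the unit never reaches threshold.
rates = [lif_spike_count(i) for i in (0.06, 0.12)]
```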


A Computer Simulation of Olfactory Cortex with Functional Implications for Storage and Retrieval of Olfactory Information

Neural Information Processing Systems

Using a simple Hebb-type learning rule in conjunction with the cortical dynamics which emerge from the anatomical and physiological organization of the model, the simulations are capable of establishing cortical representations for different input patterns. The basis of these representations lies in the interaction of sparsely distributed, highly divergent/convergent interconnections between modeled neurons. We have shown that different representations can be stored with minimal interference.
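The Hebb-type rule mentioned above can be sketched as an outer-product weight update between co-active model units. The network size, input sparsity, learning rate, and response threshold below are illustrative assumptions, not values from the simulation.

```python
import numpy as np

def hebb_update(W, pre, post, lr=0.01):
    """One Hebb-type update: strengthen connections between co-active units."""
    return W + lr * np.outer(post, pre)

rng = np.random.default_rng(0)
n = 8
W = np.zeros((n, n))                             # post x pre connection matrix
for _ in range(50):
    pre = (rng.random(n) < 0.25).astype(float)   # sparse afferent pattern
    post = np.clip((W @ pre > 0.5) + pre, 0, 1)  # crude recurrent response
    W = hebb_update(W, pre, post)
```

Repeated presentation of sparse patterns leaves a weight matrix whose structure reflects which units were co-active, which is the basis of the stored representations.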


Learning a Color Algorithm from Examples

Neural Information Processing Systems

The algorithm, which resembles a new lightness algorithm recently proposed by Land, is approximately equivalent to filtering the image through a center-surround receptive field in individual chromatic channels. The synthesizing technique, optimal linear estimation, requires only one assumption: that the operator that transforms input into output is linear. This assumption is true for a certain class of early vision algorithms that may therefore be synthesized in a similar way from examples. Other methods of synthesizing algorithms from examples, or "learning", such as backpropagation, do not yield a significantly different or better lightness algorithm in the Mondrian world. The linear estimation and backpropagation techniques both produce simultaneous brightness contrast effects. The problems that a visual system must solve in decoding two-dimensional images into three-dimensional scenes (inverse optics problems) are difficult: the information supplied by an image is not sufficient by itself to specify a unique scene. To reduce the number of possible interpretations of images, visual systems, whether artificial or biological, must make use of natural constraints, assumptions about the physical properties of surfaces and lights. Computational vision scientists have derived effective solutions for some inverse optics problems (such as computing depth from binocular disparity) by determining the appropriate natural constraints and embedding them in algorithms. How might a visual system discover and exploit natural constraints on its own? We address a simpler question: given only a set of examples of input images and desired output solutions, can a visual system synthesize such an algorithm?
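The optimal linear estimation step can be sketched as solving for the operator that best maps example inputs to desired outputs in the least-squares sense, via the Moore-Penrose pseudoinverse. The toy dimensions and synthetic data below are assumptions for illustration, not the paper's Mondrian images.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_examples = 16, 100

L_true = rng.standard_normal((n_pix, n_pix))   # hidden linear image-to-solution operator

X = rng.standard_normal((n_pix, n_examples))   # example input images, one per column
Y = L_true @ X                                 # corresponding desired output solutions

# Optimal linear estimator: L_est = Y X^+ (Moore-Penrose pseudoinverse),
# the least-squares solution of L X = Y.
L_est = Y @ np.linalg.pinv(X)
```

With more examples than pixels and a linear data-generating operator, the estimator recovers the operator exactly; with noisy or nonlinear data it returns the best linear approximation.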


Cycles: A Simulation Tool for Studying Cyclic Neural Networks

Neural Information Processing Systems

The computer program, implemented on the Texas Instruments Explorer/Odyssey system, and the results of numerous experiments are discussed. The program, CYCLES, allows a user to construct, operate, and inspect neural networks containing cyclic connection paths with the aid of a powerful graphics-based interface. Numerous cycles have been studied, including cycles with one or more activation points, non-interruptible cycles, cycles with variable path lengths, and interacting cycles. The final class, interacting cycles, is important due to its ability to implement time-dependent goal processing in neural networks. INTRODUCTION Neural networks are capable of many types of computation. However, the majority of researchers are currently limiting their studies to various forms of mapping systems, such as content addressable memories, expert system engines, and artificial retinas.


Self-Organization of Associative Database and Its Applications

Neural Information Processing Systems

Here, X is a finite or infinite set, and Y is another finite or infinite set. A learning machine observes any set of pairs (x, y) sampled randomly from X × Y (the Cartesian product of X and Y). It then computes some estimate ĵ:
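A minimal sketch of this setting: a learning machine observes sampled (x, y) pairs and forms an estimate of the mapping from X to Y. The nearest-neighbor rule below is one simple illustrative choice of estimator, not the method used in the paper, and the sample data are invented.

```python
def nearest_neighbor_estimate(samples):
    """Given observed (x, y) pairs from X x Y, return an estimated mapping."""
    def f_hat(x):
        # Predict the y paired with the closest observed x.
        return min(samples, key=lambda pair: abs(pair[0] - x))[1]
    return f_hat

# Observed pairs sampled from X x Y (here X = reals, Y = labels).
samples = [(0.0, "a"), (1.0, "b"), (2.0, "c")]
f_hat = nearest_neighbor_estimate(samples)
```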


Experimental Demonstrations of Optical Neural Computers

Neural Information Processing Systems

In the first, a closed optical feedback loop is used to implement auto-associative image recall. In the second, a perceptron-like learning algorithm is implemented with photorefractive holography.


Stability Results for Neural Networks

Neural Information Processing Systems

Department of Electrical and Computer Engineering, University of Notre Dame, Notre Dame, IN 46556. ABSTRACT In the present paper we survey and utilize results from the qualitative theory of large scale interconnected dynamical systems in order to develop a qualitative theory for the Hopfield model of neural networks. In our approach we view such networks as an interconnection of many single neurons. Our results are phrased in terms of the qualitative properties of the individual neurons and in terms of the properties of the interconnecting structure of the neural networks. Aspects of neural networks which we address include asymptotic stability, exponential stability, and instability of an equilibrium; estimates of trajectory bounds; estimates of the domain of attraction of an asymptotically stable equilibrium; and stability of neural networks under structural perturbations. INTRODUCTION In recent years, neural networks have attracted considerable attention as candidates for novel computational systems [1-3].
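As a minimal illustration of the stability notions surveyed here (not the paper's interconnected-systems analysis): in the discrete Hopfield model with symmetric, zero-diagonal weights, the energy E(s) = -(1/2) sᵀWs is non-increasing under asynchronous sign updates, so trajectories settle into equilibria. The network size and random weights below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10
A = rng.standard_normal((n, n))
W = (A + A.T) / 2                  # symmetric weights
np.fill_diagonal(W, 0)             # no self-connections

def energy(s):
    return -0.5 * s @ W @ s

s = rng.choice([-1.0, 1.0], size=n)    # random initial +/-1 state
energies = [energy(s)]
for _ in range(200):                   # asynchronous updates
    i = rng.integers(n)                # pick one neuron at a time
    s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    energies.append(energy(s))
```

Because each single-neuron update can only lower (or keep) the energy, the recorded sequence is monotonically non-increasing, which is the Lyapunov-style argument behind asymptotic stability of the equilibria.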


Network Generality, Training Required, and Precision Required

Neural Information Processing Systems

We show how to estimate (1) the number of functions that can be implemented by a particular network architecture, (2) the amount of analog precision needed in the connections in the network, and (3) the number of training examples the network must see before it can be expected to form reliable generalizations.


Using Neural Networks to Improve Cochlear Implant Speech Perception

Neural Information Processing Systems

After the implant, sound can be detected through electrical stimulation of the remaining peripheral auditory nervous system. Although great progress has been achieved in this area, no useful speech recognition has been attained with either single or multiple channel cochlear implants. Coding evidence suggests that any implant that is to couple effectively with the natural speech perception system must simulate the temporal dispersion and other phenomena found in the natural receptors, which are currently not implemented in any cochlear implant. To this end, we present here a computational model using artificial neural networks (ANN) to incorporate these natural phenomena into the artificial cochlea. The ANN model presents a series of advantages for the implementation of such systems.