Learning a Color Algorithm from Examples

Neural Information Processing Systems

The algorithm, which resembles a new lightness algorithm recently proposed by Land, is approximately equivalent to filtering the image through a center-surround receptive field in individual chromatic channels. The synthesizing technique, optimal linear estimation, requires only one assumption: that the operator that transforms input into output is linear. This assumption holds for a certain class of early vision algorithms, which may therefore be synthesized in a similar way from examples. Other methods of synthesizing algorithms from examples, or "learning", such as backpropagation, do not yield a significantly different or better lightness algorithm in the Mondrian world. The linear estimation and backpropagation techniques both produce simultaneous brightness contrast effects. The problems that a visual system must solve in decoding two-dimensional images into three-dimensional scenes (inverse optics problems) are difficult: the information supplied by an image is not sufficient by itself to specify a unique scene. To reduce the number of possible interpretations of images, visual systems, whether artificial or biological, must make use of natural constraints: assumptions about the physical properties of surfaces and lights. Computational vision scientists have derived effective solutions for some inverse optics problems (such as computing depth from binocular disparity) by determining the appropriate natural constraints and embedding them in algorithms. How might a visual system discover and exploit natural constraints on its own? We address a simpler question: given only a set of examples of input images and desired output solutions, can a visual system synthesize such an algorithm?
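As a hedged illustration of the synthesis step (a minimal sketch, not the authors' implementation; all dimensions, data, and variable names below are invented), the optimal linear estimator can be recovered from example input/output pairs by least squares:

```python
# Sketch of optimal linear estimation from examples, assuming only that
# output = L @ input for some linear operator L (the paper's one assumption).
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_examples = 64, 500

# Hypothetical training set: columns of X are vectorized input images,
# columns of Y the corresponding desired outputs.
X = rng.standard_normal((n_pixels, n_examples))
true_operator = rng.standard_normal((n_pixels, n_pixels))
Y = true_operator @ X

# Least-squares estimate of the operator via the Moore-Penrose pseudoinverse.
L_hat = Y @ np.linalg.pinv(X)
print("recovery error:", np.linalg.norm(L_hat - true_operator))
```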


Cycles: A Simulation Tool for Studying Cyclic Neural Networks

Neural Information Processing Systems

The computer program, implemented on the Texas Instruments Explorer/Odyssey system, and the results of numerous experiments are discussed. The program, CYCLES, allows a user to construct, operate, and inspect neural networks containing cyclic connection paths with the aid of a powerful graphics-based interface. Numerous cycles have been studied, including cycles with one or more activation points, non-interruptible cycles, cycles with variable path lengths, and interacting cycles. The final class, interacting cycles, is important due to its ability to implement time-dependent goal processing in neural networks. INTRODUCTION Neural networks are capable of many types of computation. However, the majority of researchers are currently limiting their studies to various forms of mapping systems, such as content-addressable memories, expert system engines, and artificial retinas.
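As a hedged sketch of one idea from this tool (the actual CYCLES program, with its graphical interface on the TI Explorer, is not reproduced here), a single activation point can be made to circulate indefinitely around a cycle of threshold units:

```python
# Toy cyclic network: unit i excites unit (i+1) mod n, so one activation
# point travels around the ring step after step. Sizes are illustrative.
import numpy as np

n = 5
W = np.zeros((n, n))
for i in range(n):
    W[(i + 1) % n, i] = 1.0  # ring connectivity

state = np.zeros(n)
state[0] = 1.0  # inject one activation point into the cycle

for t in range(10):
    print(f"t={t}: {state.astype(int)}")
    state = (W @ state >= 1.0).astype(float)  # synchronous threshold update
```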


Self-Organization of Associative Database and Its Applications

Neural Information Processing Systems

Here, X is a finite or infinite set, and Y is another finite or infinite set. A learning machine observes any set of pairs (x, y) sampled randomly from X × Y (the Cartesian product of X and Y). From these observations it computes some estimate ŷ of the output associated with a given input x.
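To make the setting concrete, here is a minimal sketch of such a learning machine; a nearest-neighbor lookup stands in for the paper's self-organizing associative database, and the data are invented:

```python
# Observe pairs (x, y) sampled from X x Y, then estimate y for a new x
# by recalling the y stored with the most similar observed x.
import numpy as np

rng = np.random.default_rng(1)
xs = rng.uniform(-1, 1, size=(200, 2))   # sampled x in X
ys = np.sin(xs[:, 0]) + xs[:, 1] ** 2    # associated y in Y

def estimate(x_query):
    i = np.argmin(np.linalg.norm(xs - x_query, axis=1))
    return ys[i]

print(estimate(np.array([0.3, -0.2])))
```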


Experimental Demonstrations of Optical Neural Computers

Neural Information Processing Systems

In the first, a closed optical feedback loop is used to implement auto-associative image recall. In the second, a perceptron-like learning algorithm is implemented with photorefractive holography.
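The holographic hardware itself is not modeled here, but the perceptron-like rule it implements can be sketched in software (data and dimensions are invented for illustration):

```python
# Classic perceptron updates: on each misclassified example, w += y * x.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 8))
labels = np.sign(X @ rng.standard_normal(8))  # linearly separable targets

w = np.zeros(8)
for _ in range(50):
    for x, y in zip(X, labels):
        if np.sign(x @ w) != y:
            w += y * x  # error-driven weight change

print("training errors:", int(np.sum(np.sign(X @ w) != labels)))
```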


Stability Results for Neural Networks

Neural Information Processing Systems

Department of Electrical and Computer Engineering, University of Notre Dame, Notre Dame, IN 46556

ABSTRACT In the present paper we survey and utilize results from the qualitative theory of large-scale interconnected dynamical systems in order to develop a qualitative theory for the Hopfield model of neural networks. In our approach we view such networks as an interconnection of many single neurons. Our results are phrased in terms of the qualitative properties of the individual neurons and in terms of the properties of the interconnecting structure of the neural networks. Aspects of neural networks which we address include asymptotic stability, exponential stability, and instability of an equilibrium; estimates of trajectory bounds; estimates of the domain of attraction of an asymptotically stable equilibrium; and stability of neural networks under structural perturbations. INTRODUCTION In recent years, neural networks have attracted considerable attention as candidates for novel computational systems [1-3].
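One property the paper treats analytically, convergence toward an asymptotically stable equilibrium, can be checked numerically in a minimal sketch (symmetric weights with zero diagonal, a standard sufficient condition; all values below are illustrative):

```python
# Discrete Hopfield network: under asynchronous sign updates the energy
# E(s) = -0.5 * s' W s never increases, so trajectories settle into a
# stable equilibrium.
import numpy as np

rng = np.random.default_rng(3)
n = 16
A = rng.standard_normal((n, n))
W = (A + A.T) / 2            # symmetric weights
np.fill_diagonal(W, 0.0)     # zero self-connections

s = rng.choice([-1.0, 1.0], size=n)
energy = lambda s: -0.5 * s @ W @ s

for _ in range(200):
    i = rng.integers(n)                    # pick one neuron at random
    e_before = energy(s)
    s[i] = 1.0 if W[i] @ s >= 0 else -1.0  # asynchronous update
    assert energy(s) <= e_before + 1e-12   # monotone energy descent

print("final energy:", energy(s))
```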


Network Generality, Training Required, and Precision Required

Neural Information Processing Systems

We show how to estimate (1) the number of functions that can be implemented by a particular network architecture, (2) how much analog precision is needed in the connections of the network, and (3) the number of training examples the network must see before it can be expected to form reliable generalizations.
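A toy version of estimate (1), under assumptions of our own (a single threshold unit on three binary inputs, weights quantized to a coarse grid), simply enumerates the distinct Boolean functions the architecture can realize:

```python
# Count the functions implementable by one threshold unit with weights
# and bias drawn from {-1, 0, 1}; coarser grids model lower analog precision.
import itertools
import numpy as np

inputs = np.array(list(itertools.product([0, 1], repeat=3)))
weight_levels = [-1.0, 0.0, 1.0]

functions = set()
for w in itertools.product(weight_levels, repeat=3):
    for b in weight_levels:
        outputs = tuple((inputs @ np.array(w) + b >= 0).astype(int))
        functions.add(outputs)

print(f"{len(functions)} distinct functions out of {2 ** 8} possible")
```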


Using Neural Networks to Improve Cochlear Implant Speech Perception

Neural Information Processing Systems

After the implant, sound can be detected through electrical stimulation of the remaining peripheral auditory nervous system. Although great progress has been achieved in this area, no useful speech recognition has been attained with either single- or multiple-channel cochlear implants. Coding evidence suggests that any implant which would effectively couple with the natural speech perception system must simulate the temporal dispersion and other phenomena found in the natural receptors, which are currently not implemented in any cochlear implant. To this end, we present here a computational model using artificial neural networks (ANN) to incorporate these natural phenomena in the artificial cochlea. The ANN model presents a series of advantages for the implementation of such systems.
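As a loosely hedged sketch of the temporal-dispersion idea only (the tap count, decay constant, and function name are assumptions, not values from the paper), each stimulation channel can be spread over time with a bank of delayed, decaying taps:

```python
# Spread a single electrical pulse over time, imitating the temporal
# dispersion found in natural auditory receptors.
import numpy as np

def disperse(channel, n_taps=8, decay=0.6):
    kernel = decay ** np.arange(n_taps)   # delayed, decaying taps
    kernel /= kernel.sum()
    return np.convolve(channel, kernel)[: len(channel)]

pulse = np.zeros(20)
pulse[2] = 1.0  # one stimulation pulse on one channel
print(np.round(disperse(pulse), 3))
```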



Temporal Patterns of Activity in Neural Networks

Neural Information Processing Systems

Paolo Gaudiano, Dept. of Aerospace Engineering Sciences, University of Colorado, Boulder, CO 80309, USA. January 5, 1988.

Abstract: Patterns of activity over real neural structures are known to exhibit time-dependent behavior. It would seem that the brain may be capable of utilizing temporal behavior of activity in neural networks as a way of performing functions which cannot otherwise be easily implemented. These might include the origination of sequential behavior and the recognition of time-dependent stimuli. A model is presented here which uses neuronal populations with recurrent feedback connections in an attempt to observe and describe the resulting time-dependent behavior. Shortcomings and problems inherent to this model are discussed. Current models by other researchers are reviewed and their similarities and differences discussed.
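A minimal sketch in the spirit of the model described (two neuronal populations with recurrent feedback; the parameters are chosen only to make the time-dependent behavior visible and are not from the paper):

```python
# Leaky-integrator populations with recurrent feedback: the self-excitation
# makes the rest state unstable, and the tanh saturation bounds the
# trajectory, yielding sustained oscillatory activity.
import numpy as np

W = np.array([[1.5, -1.2],
              [1.2,  1.5]])   # recurrent feedback between two populations
x = np.array([0.1, 0.0])
dt, tau = 0.1, 1.0

for t in range(200):
    x = x + (dt / tau) * (-x + np.tanh(W @ x))
    if t % 40 == 0:
        print(f"t={t * dt:4.1f}: {np.round(x, 3)}")
```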


Hierarchical Learning Control - An Approach with Neuron-Like Associative Memories

Neural Information Processing Systems

In this paper, research of the second line is described: starting from a neurophysiologically inspired model of stimulus-response (SR) and/or associative memorization and a psychologically motivated ministructure for basic control tasks, preconditions and conditions are studied for the cooperation of such units in a hierarchical organisation, as can be assumed to be the general layout of macrostructures in the brain. I. INTRODUCTION Theoretic modelling in brain theory is a highly speculative subject. However, it is necessary, since it seems very unlikely that a clear picture of this very complicated device can be obtained by analyzing the available measurements on sound and/or damaged brain parts alone. As in general physics, one has to realize that there are different levels of modelling: in physics, stretching from the atomic level over atom assemblies up to general behavioural models like kinematics and mechanics; in brain theory, stretching from chemical reactions over electrical spikes and neuronal cell assembly cooperation up to general human behaviour. The research discussed in this paper is located just above the direct study of synaptic cooperation of neuronal cell assemblies as studied, e.g., in /Amari 1988/. It takes into account the changes of synaptic weighting, without simulating the physical details of such changes, and makes use of a general imitation of learning situation (stimulus)-response connections for building up trainable basic control loops, which allow dynamic SR memorization and which are themselves elements of some more complex behavioural loops.
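A hedged sketch of the trainable stimulus-response (SR) memorization described above (the quantization scheme, learning rate, and target mapping are illustrative assumptions, not the paper's design):

```python
# Table-lookup associative memory for an SR control loop: a quantized
# stimulus indexes a memory cell, and training nudges the stored response
# toward the desired one.
import numpy as np

n_cells, lr = 32, 0.5
memory = np.zeros(n_cells)

def cell(stimulus):
    """Quantize a stimulus in [-1, 1] onto a memory cell index."""
    return int(np.clip((stimulus + 1) / 2 * (n_cells - 1), 0, n_cells - 1))

def train(stimulus, desired):
    i = cell(stimulus)
    memory[i] += lr * (desired - memory[i])  # move stored response toward target

rng = np.random.default_rng(4)
for _ in range(500):               # learn the SR mapping response = stimulus^2
    s = rng.uniform(-1, 1)
    train(s, s ** 2)

print("response to 0.5:", round(memory[cell(0.5)], 3))  # roughly 0.25
```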