

Real-Time Computer Vision and Robotics Using Analog VLSI Circuits

Neural Information Processing Systems

The long-term goal of our laboratory is the development of analog resistive-network-based VLSI implementations of early and intermediate vision algorithms. We demonstrate an experimental circuit for smoothing and segmenting noisy and sparse depth data using the resistive fuse, and a 1-D edge-detection circuit for computing zero-crossings using two resistive grids with different space constants. To demonstrate the robustness of our algorithms and of the fabricated analog CMOS VLSI chips, we are mounting these circuits onto small mobile vehicles operating in a real-time laboratory environment.
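As a rough software analogue of the edge-detection scheme above, the sketch below smooths a 1-D signal on two simulated resistive grids (whose impulse response decays roughly exponentially with a characteristic space constant) and marks zero-crossings of their difference. The space constants, kernel truncation, and noisy step input are illustrative assumptions, not values from the chip.

```python
import numpy as np

def resistive_grid_smooth(signal, space_constant):
    # A uniform 1-D resistive grid smooths its input roughly like a
    # normalized exponential kernel exp(-|x| / space_constant).
    half = int(5 * space_constant)
    x = np.arange(-half, half + 1)
    kernel = np.exp(-np.abs(x) / space_constant)
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")

def zero_crossing_edges(signal, lam_small=1.0, lam_large=4.0):
    # Difference of two grid outputs with different space constants;
    # sign changes in the difference mark candidate edges.
    diff = (resistive_grid_smooth(signal, lam_small)
            - resistive_grid_smooth(signal, lam_large))
    return np.where(np.diff(np.sign(diff)) != 0)[0]

# Noisy step input: the step should show up as a zero-crossing near x = 50.
rng = np.random.default_rng(0)
step = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
print(zero_crossing_edges(step))
```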


Operational Fault Tolerance of CMAC Networks

Neural Information Processing Systems

The performance sensitivity of Albus' CMAC network was studied for the scenario in which faults are introduced into the adjustable weights after training has been accomplished. It was found that fault sensitivity was reduced with increased generalization when "loss of weight" faults were considered, but sensitivity was increased for "saturated weight" faults.

1 INTRODUCTION

Fault tolerance is often cited as an inherent property of neural networks, and is thought by many to be a natural consequence of "massively parallel" computational architectures. Numerous anecdotal reports of fault-tolerance experiments, primarily in pattern classification tasks, abound in the literature. However, there has been surprisingly little rigorous investigation of the fault-tolerance properties of various network architectures in other application areas. In this paper we investigate the fault tolerance of the CMAC (Cerebellar Model Arithmetic Computer) network [Albus 1975] in a systematic manner. CMAC networks have attracted much recent attention because of their successful application in robotic manipulator control [Ersu 1984, Miller 1986, Lane 1988].
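A minimal software sketch of the experiment described above, assuming a simplified 1-D CMAC (overlapping tilings, one weight per tiling) trained by LMS; the tile counts, fault rate, and target function are invented for illustration and are not the paper's settings.

```python
import numpy as np

class CMAC:
    # Minimal 1-D CMAC: C overlapping tilings over a scalar input in [0, 1);
    # the output is the sum of one weight per tiling (after Albus 1975).
    def __init__(self, n_tiles=64, C=8, lr=0.1):
        self.n_tiles, self.C, self.lr = n_tiles, C, lr
        self.w = np.zeros((C, n_tiles))

    def active(self, x):
        # Each tiling is offset by a fraction of one tile width.
        return ((x * self.n_tiles + np.arange(self.C) / self.C)
                .astype(int) % self.n_tiles)

    def predict(self, x):
        return self.w[np.arange(self.C), self.active(x)].sum()

    def train(self, x, target):
        err = target - self.predict(x)
        self.w[np.arange(self.C), self.active(x)] += self.lr * err / self.C

rng = np.random.default_rng(1)
net = CMAC()
for _ in range(5000):
    x = rng.random()
    net.train(x, np.sin(2 * np.pi * x))

# Inject faults into a random 5% of the trained weights.
mask = rng.random(net.w.shape) < 0.05
lost = net.w.copy(); lost[mask] = 0.0                 # "loss of weight" fault
sat = net.w.copy(); sat[mask] = np.abs(net.w).max()   # "saturated weight" fault

xs = np.linspace(0, 1, 200, endpoint=False)
for name, w in [("lost", lost), ("saturated", sat)]:
    faulty = CMAC(); faulty.w = w
    rmse = np.sqrt(np.mean([(faulty.predict(x) - np.sin(2 * np.pi * x)) ** 2
                            for x in xs]))
    print(name, round(rmse, 3))
```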


Dataflow Architectures: Flexible Platforms for Neural Network Simulation

Neural Information Processing Systems

Dataflow architectures are general computation engines optimized for the execution of fine-grain parallel algorithms. Neural networks can be simulated on these systems with certain advantages. In this paper, we review dataflow architectures, examine neural network simulation performance on a new-generation dataflow machine, compare that performance to other simulation alternatives, and discuss the benefits and drawbacks of the dataflow approach.
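The defining property of dataflow execution, that an instruction fires as soon as all of its operand tokens arrive, can be sketched in a few lines. The scheduler below is a toy sequential emulation with an invented two-input "neuron" graph; a real dataflow machine would fire ready nodes in parallel.

```python
from collections import defaultdict, deque

def run_dataflow(nodes, edges, inputs):
    # nodes: {name: (op, n_inputs)}; edges: {name: [successor names]}.
    # A node becomes ready the moment all of its input tokens have arrived.
    tokens = defaultdict(list)
    for name, value in inputs.items():
        tokens[name].append(value)
    ready = deque(n for n, (_, k) in nodes.items() if len(tokens[n]) == k)
    results = {}
    while ready:
        name = ready.popleft()
        op, _ = nodes[name]
        results[name] = op(*tokens[name])
        for succ in edges.get(name, []):
            tokens[succ].append(results[name])
            if len(tokens[succ]) == nodes[succ][1]:
                ready.append(succ)
    return results

# A two-input "neuron" decomposed into fine-grain ops: weighted sum, threshold.
nodes = {"in1": (lambda v: v, 1), "in2": (lambda v: v, 1),
         "sum": (lambda a, b: 0.7 * a + 0.3 * b, 2),
         "out": (lambda s: float(s > 0.5), 1)}
edges = {"in1": ["sum"], "in2": ["sum"], "sum": ["out"]}
print(run_dataflow(nodes, edges, {"in1": 1.0, "in2": 0.0}))
```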



Mechanisms for Neuromodulation of Biological Neural Networks

Neural Information Processing Systems

The pyloric Central Pattern Generator of the crustacean stomatogastric ganglion is a well-defined biological neural network. This 14-neuron network is modulated by many inputs. These inputs reconfigure the network to produce multiple output patterns by three simple mechanisms: 1) determining which cells are active; 2) modulating synaptic efficacy; 3) changing the intrinsic response properties of individual neurons. The importance of modifiable intrinsic response properties of neurons for network function and modulation is discussed.
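A caricature of the three mechanisms in rate-model form (the real network is fourteen conductance-based neurons; every number below is an illustrative assumption): the modulator can silence a cell, scale the synaptic weights, or change a cell's intrinsic gain, and each manipulation yields a different steady-state pattern.

```python
import numpy as np

def step(rates, W, gain, active, drive, dt=0.1):
    # One Euler step of a rate model: each cell relaxes toward a
    # thresholded function of its net input.
    inp = W @ rates + drive
    target = active * np.maximum(0.0, np.tanh(gain * inp))
    return rates + dt * (target - rates)

def run(W, gain, active, steps=300):
    r = np.zeros(3)
    drive = np.ones(3)
    for _ in range(steps):
        r = step(r, W, gain, active, drive)
    return np.round(r, 2)

W = np.array([[0.0, -1.0,  0.0],
              [-1.0, 0.0, -1.0],
              [0.0, -1.0,  0.0]])   # mutual inhibition among three cells

print("baseline         ", run(W, gain=1.0, active=np.ones(3)))
print("cell 2 silenced  ", run(W, gain=1.0, active=np.array([1.0, 0.0, 1.0])))  # mechanism 1
print("weaker synapses  ", run(0.3 * W, gain=1.0, active=np.ones(3)))           # mechanism 2
print("higher cell gain ", run(W, gain=3.0, active=np.ones(3)))                 # mechanism 3
```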


A Systematic Study of the Input/Output Properties of a 2 Compartment Model Neuron With Active Membranes

Neural Information Processing Systems

The input/output properties of a 2 compartment model neuron are systematically explored. Taken from the work of MacGregor (MacGregor, 1987), the model neuron's compartments contain several active conductances, including a potassium conductance in the dendritic compartment driven by the accumulation of intradendritic calcium. Dynamics of the conductances and potentials are governed by a set of coupled first-order differential equations which are integrated numerically. The model has a set of 17 internal parameters, specifying conductance rate constants, time constants, thresholds, etc. To study parameter sensitivity, a set of trials was run in which the input driving the neuron was kept fixed while each internal parameter was varied with all others left fixed. To study the input/output relation, the input to the dendrite (a square wave) was varied in frequency and magnitude while all internal parameters of the system were left fixed, and the resulting output firing rate and bursting rate were measured. The input/output relation of the model neuron studied turns out to be much more sensitive to modulation of certain dendritic potassium current parameters than to plasticity of synaptic efficacy per se (the amount of current influx due to synapse activation). This in turn suggests, as has recently been observed experimentally, that the potassium current may be at least as important a focus of neural plasticity as synaptic efficacy.
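A minimal sketch in the spirit of the model described above: two coupled compartments, a dendritic calcium pool driving a potassium conductance, forward-Euler integration, and a square-wave input. All constants are invented for illustration; this is a harness for the kind of one-parameter-at-a-time sweep the paper runs, not a reproduction of its 17-parameter model or its quantitative findings.

```python
# Two-compartment sketch (soma + dendrite) with a dendritic Ca-activated
# K conductance; every constant below is an illustrative assumption.
def simulate(i_amp=6.0, gk_scale=0.1, period=50.0, dt=0.1, t_max=500.0):
    vs = vd = ca = 0.0
    tau, g_couple, theta = 5.0, 1.0, 1.0
    spikes = 0
    for n in range(int(t_max / dt)):
        t = n * dt
        i_in = i_amp if (t % period) < period / 2 else 0.0   # square-wave drive
        ca += dt * (0.1 * max(vd, 0.0) - 0.02 * ca)          # dendritic Ca pool
        gk = gk_scale * ca                                   # Ca-dependent K conductance
        vd += dt * (-vd - gk * vd + g_couple * (vs - vd) + i_in) / tau
        vs += dt * (-vs + g_couple * (vd - vs)) / tau
        if vs > theta:                                       # threshold-and-reset spike
            spikes += 1
            vs = 0.0
    return spikes

# Sweep one quantity with everything else fixed and compare spike counts.
print("baseline          ", simulate())
print("input +50%        ", simulate(i_amp=9.0))
print("K-conductance x0.5", simulate(gk_scale=0.05))
```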


Performance Comparisons Between Backpropagation Networks and Classification Trees on Three Real-World Applications

Neural Information Processing Systems

In this paper we compare regression and classification systems. A regression system can generate an output f for an input X, where both X and f are continuous and, perhaps, multidimensional. A classification system can generate an output class, C, for an input X, where X is continuous and multidimensional and C is a member of a finite alphabet. The statistical technique of Classification And Regression Trees (CART) was developed during the years 1973 (Meisel and Michalopoulos) through 1984 (Breiman et al.).
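A minimal modern stand-in for the paper's comparison, using scikit-learn's classification tree and a small backpropagation network on a synthetic task; the dataset and hyperparameters are assumptions, not the paper's three real-world applications.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in dataset: continuous multidimensional X, finite classes C.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)      # CART-style tree
mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)                # backprop network

print("tree accuracy:", tree.score(X_te, y_te))
print("mlp accuracy: ", mlp.score(X_te, y_te))
```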


An Analog VLSI Model of Adaptation in the Vestibulo-Ocular Reflex

Neural Information Processing Systems

The vestibulo-ocular reflex (VOR) is the primary mechanism that controls the compensatory eye movements that stabilize retinal images during rapid head motion. The primary pathways of this system are feed-forward, with inputs from the semicircular canals and outputs to the oculomotor system. Since visual feedback is not used directly in the VOR computation, the system must exploit motor learning to perform correctly. Lisberger (1988) has proposed a model for adapting the VOR gain using image-slip information from the retina. We have designed and tested analog very large-scale integrated (VLSI) circuitry that implements a simplified version of Lisberger's adaptive VOR model.
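The adaptation loop itself reduces to a few lines: drive the eye with a feed-forward gain, measure the residual retinal slip, and nudge the gain with the slip-velocity correlation. The sketch below is a discrete-time caricature of that idea with an invented learning rate, not the chip's circuit.

```python
import numpy as np

rng = np.random.default_rng(0)
gain, lr = 0.3, 0.01          # start miscalibrated; the ideal gain is 1.0
for trial in range(2000):
    head_velocity = rng.standard_normal()
    eye_velocity = -gain * head_velocity          # feed-forward VOR command
    retinal_slip = head_velocity + eye_velocity   # residual image motion on the retina
    gain += lr * retinal_slip * head_velocity     # slip-correlated gain update
print(round(gain, 3))  # converges toward 1.0, where slip vanishes
```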


Computational Efficiency: A Common Organizing Principle for Parallel Computer Maps and Brain Maps?

Neural Information Processing Systems

It is well known that neural responses in particular brain regions are spatially organized, but no general principles have been developed that relate the structure of a brain map to the nature of the associated computation. On parallel computers, maps of a sort quite similar to brain maps arise when a computation is distributed across multiple processors. In this paper we will discuss the relationship between maps and computations on these computers and suggest how similar considerations might also apply to maps in the brain.

1 INTRODUCTION

A great deal of effort in experimental and theoretical neuroscience is devoted to recording and interpreting spatial patterns of neural activity. A variety of map patterns have been observed in different brain regions and, presumably, these patterns reflect something about the nature of the neural computations being carried out in these regions. To date, however, there have been no general principles for interpreting the structure of a brain map in terms of properties of the associated computation. In the field of parallel computing, analogous maps arise when a computation is distributed across multiple processors and, in this case, the relationship between maps and computations is better understood. In this paper, we will attempt to relate some of the mapping principles from the field of parallel computing to the organization of brain maps.
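A toy illustration of why map structure matters on a parallel machine: mapping a 2-D grid of units onto processors in contiguous blocks keeps most nearest-neighbor traffic on-processor, while an interleaved mapping cuts every edge. Grid and machine sizes below are arbitrary assumptions.

```python
def block_map(rows, cols, p_rows, p_cols):
    # Assign grid cell (r, c) to processor (r // br, c // bc): contiguous blocks.
    br, bc = rows // p_rows, cols // p_cols
    return {(r, c): (r // br, c // bc) for r in range(rows) for c in range(cols)}

def off_processor_edges(rows, cols, mapping):
    # Count nearest-neighbor pairs whose endpoints live on different processors.
    cut = 0
    for r in range(rows):
        for c in range(cols):
            for nr, nc in ((r, c + 1), (r + 1, c)):
                if nr < rows and nc < cols and mapping[(r, c)] != mapping[(nr, nc)]:
                    cut += 1
    return cut

rows = cols = 16
blocky = block_map(rows, cols, 4, 4)
interleaved = {(r, c): (r % 4, c % 4) for r in range(rows) for c in range(cols)}
print("block mapping cut edges:      ", off_processor_edges(rows, cols, blocky))       # 96
print("interleaved mapping cut edges:", off_processor_edges(rows, cols, interleaved))  # 480
```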


Meiosis Networks

Neural Information Processing Systems

A central problem in connectionist modelling is the control of network and architectural resources during learning. In the present approach, weights reflect a coarse prediction history as coded by a distribution of values, parameterized in the mean and standard deviation of these weight distributions. Weight updates are a function of both the mean and standard deviation of each connection in the network and vary as a function of the error signal ("stochastic delta rule"; Hanson, 1990). Consequently, the weights maintain information on their central tendency and their "uncertainty" in prediction. Such information is useful in establishing a policy concerning the nodal complexity of the network and the growth of new nodes. For example, during problem solving the present network can undergo "meiosis", producing two nodes where there was one "overtaxed" node, as measured by its coefficient of variation. It is shown on a number of benchmark problems that meiosis networks can find minimal architectures, reduce computational complexity, and overall increase the efficiency of the representation-learning interaction.
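A sketch of the two ingredients, under invented thresholds and shapes: weights carry a mean and standard deviation and are sampled on each forward pass (the stochastic delta rule), and any hidden node whose weights' coefficient of variation exceeds a threshold is split ("meiosis"). Output-side weights and the full training loop are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

class StochasticLayer:
    def __init__(self, n_in, n_out):
        self.mu = rng.standard_normal((n_in, n_out)) * 0.1   # weight means
        self.sigma = np.full((n_in, n_out), 0.1)             # weight std devs

    def forward(self, x):
        # Stochastic delta rule style: sample weights on every forward pass.
        w = self.mu + self.sigma * rng.standard_normal(self.mu.shape)
        return x @ w

    def meiosis(self, cv_threshold=1.0):
        # Split any node whose mean coefficient of variation |sigma/mu| is
        # too high: the one "overtaxed" node becomes two perturbed copies.
        cv = np.mean(np.abs(self.sigma) / (np.abs(self.mu) + 1e-8), axis=0)
        for j in np.where(cv > cv_threshold)[0]:
            for arr_name in ("mu", "sigma"):
                arr = getattr(self, arr_name)
                child = arr[:, j:j+1] * (0.5 + 0.1 * rng.standard_normal((arr.shape[0], 1)))
                setattr(self, arr_name, np.concatenate([arr, child], axis=1))
        return self.mu.shape[1]

layer = StochasticLayer(n_in=4, n_out=3)
layer.sigma[:, 0] = 5.0                 # make node 0 look "overtaxed"
print("nodes after meiosis:", layer.meiosis())  # 3 -> 4
```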