Performance Comparisons Between Backpropagation Networks and Classification Trees on Three Real-World Applications

Neural Information Processing Systems

In this paper we compare regression and classification systems. A regression system can generate an output f for an input X, where both X and f are continuous and, perhaps, multidimensional. A classification system can generate an output class, C, for an input X, where X is continuous and multidimensional and C is a member of a finite alphabet. The statistical technique of Classification And Regression Trees (CART) was developed during the years 1973 (Meisel and Michalopoulos) through 1984 (Breiman et al.).
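
The regression/classification distinction drawn above can be made concrete in code. The following is a hypothetical sketch (the linear map and two-class alphabet are illustrative choices, not the paper's CART or backpropagation systems): a continuous output f for regression, a label C from a finite alphabet for classification.

    # Hypothetical sketch of the two system types compared in the paper:
    # a regression system maps a continuous input X to a continuous output,
    # while a classification system maps X to a label from a finite alphabet.

    import numpy as np

    def regression_system(X, w, b):
        """Continuous, possibly multidimensional output f for input X."""
        return X @ w + b            # e.g. a linear map; CART would use a tree

    def classification_system(X, w, b, classes=("A", "B")):
        """Discrete output C drawn from a finite alphabet."""
        score = X @ w + b
        return classes[int(score > 0)]

    X = np.array([0.5, -1.2])
    w = np.array([1.0, 0.3])
    print(regression_system(X, w, 0.1))      # a continuous value
    print(classification_system(X, w, 0.1))  # "A" or "B"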


An Analog VLSI Model of Adaptation in the Vestibulo-Ocular Reflex

Neural Information Processing Systems

The vestibulo-ocular reflex (VOR) is the primary mechanism that controls the compensatory eye movements that stabilize retinal images during rapid head motion. The primary pathways of this system are feed-forward, with inputs from the semicircular canals and outputs to the oculomotor system. Since visual feedback is not used directly in the VOR computation, the system must exploit motor learning to perform correctly. Lisberger (1988) has proposed a model for adapting the VOR gain using image-slip information from the retina. We have designed and tested analog very large-scale integrated (VLSI) circuitry that implements a simplified version of Lisberger's adaptive VOR model.
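
The error-driven gain adaptation described here admits a simple worked example. The update rule, learning rate, and signals below are illustrative assumptions in the spirit of the image-slip-driven adaptation, not the paper's circuit equations:

    # Toy sketch: adapt VOR gain g so that eye velocity cancels head velocity.
    # Retinal image slip = head_velocity + eye_velocity; with eye_velocity =
    # -g * head_velocity, slip vanishes when g reaches the ideal gain of 1.

    def adapt_vor_gain(head_velocities, g=0.5, lr=0.05):
        for h in head_velocities:
            eye = -g * h                 # feed-forward VOR command
            slip = h + eye               # image slip seen by the retina
            g += lr * slip * h           # error-driven gain update
        return g

    print(adapt_vor_gain([1.0, -0.8, 1.2, -1.0] * 50))  # approaches 1.0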


Computational Efficiency: A Common Organizing Principle for Parallel Computer Maps and Brain Maps?

Neural Information Processing Systems

It is well-known that neural responses in particular brain regions are spatially organized, but no general principles have been developed that relate the structure of a brain map to the nature of the associated computation. On parallel computers, maps of a sort quite similar to brain maps arise when a computation is distributed across multiple processors. In this paper we will discuss the relationship between maps and computations on these computers and suggest how similar considerations might also apply to maps in the brain.

1 INTRODUCTION
A great deal of effort in experimental and theoretical neuroscience is devoted to recording and interpreting spatial patterns of neural activity. A variety of map patterns have been observed in different brain regions and, presumably, these patterns reflect something about the nature of the neural computations being carried out in these regions. To date, however, there have been no general principles for interpreting the structure of a brain map in terms of properties of the associated computation. In the field of parallel computing, analogous maps arise when a computation is distributed across multiple processors and, in this case, the relationship between maps and computations is better understood. In this paper, we will attempt to relate some of the mapping principles from the field of parallel computing to the organization of brain maps.
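
For readers unfamiliar with processor maps, a toy block decomposition illustrates the efficiency principle at work; the 1-D problem and the code below are illustrative assumptions, not from the paper:

    # Toy illustration: map a 1-D array of N sites onto P processors in
    # contiguous blocks, so neighboring sites (which must exchange data)
    # usually live on the same processor, minimizing communication.

    def block_map(n_sites, n_procs):
        block = -(-n_sites // n_procs)           # ceiling division
        return [i // block for i in range(n_sites)]

    mapping = block_map(16, 4)
    print(mapping)  # [0,0,0,0, 1,1,1,1, 2,2,2,2, 3,3,3,3]

    # Fraction of neighbor pairs that cross a processor boundary:
    pairs = len(mapping) - 1
    cross = sum(mapping[i] != mapping[i + 1] for i in range(pairs)) / pairs
    print(cross)    # 3/15 = 0.2; a scattered map would be far worse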


Meiosis Networks

Neural Information Processing Systems

A central problem in connectionist modelling is the control of network and architectural resources during learning. In the present approach, weights reflect a coarse prediction history as coded by a distribution of values and parameterized in the mean and standard deviation of these weight distributions. Weight updates are a function of both the mean and standard deviation of each connection in the network and vary as a function of the error signal ("stochastic delta rule"; Hanson, 1990). Consequently, the weights maintain information on their central tendency and their "uncertainty" in prediction. Such information is useful in establishing a policy concerning the size of the nodal complexity of the network and growth of new nodes. For example, during problem solving the present network can undergo "meiosis", producing two nodes where there was one "overtaxed" node as measured by its coefficient of variation. It is shown in a number of benchmark problems that meiosis networks can find minimal architectures, reduce computational complexity, and overall increase the efficiency of the representation learning interaction.
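
A minimal sketch of the meiosis test described above, assuming each weight carries a mean and a standard deviation and a node splits when the coefficient of variation of its weights exceeds a threshold (the threshold value and the details of the split rule are illustrative assumptions):

    # Sketch of the meiosis test: each weight carries a mean and a standard
    # deviation (stochastic delta rule); a node whose weights are "uncertain"
    # relative to their size (high coefficient of variation) is split in two.

    import numpy as np

    def should_split(means, stds, threshold=1.0):
        cv = np.mean(stds) / (np.mean(np.abs(means)) + 1e-12)
        return cv > threshold

    def meiosis(means, stds):
        """Replace one overtaxed node with two jittered copies."""
        rng = np.random.default_rng(0)
        child_a = means + rng.normal(0, stds)
        child_b = means + rng.normal(0, stds)
        return (child_a, stds / 2), (child_b, stds / 2)

    means, stds = np.array([0.1, -0.2]), np.array([0.5, 0.6])
    if should_split(means, stds):
        (m1, s1), (m2, s2) = meiosis(means, stds)
        print(m1, m2)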


A Neural Network for Real-Time Signal Processing

Neural Information Processing Systems

This paper describes a neural network algorithm that (1) performs temporal pattern matching in real-time, (2) is trained online, with a single pass, (3) requires only a single template for training of each representative class, (4) is continuously adaptable to changes in background noise, (5) deals with transient signals having low signal-to-noise ratios, (6) works in the presence of non-Gaussian noise, (7) makes use of context dependencies and (8) outputs Bayesian probability estimates. The algorithm has been adapted to the problem of passive sonar signal detection and classification. It runs on a Connection Machine and correctly classifies, within 500 ms of onset, signals embedded in noise and subject to considerable uncertainty.

1 INTRODUCTION
This paper describes a neural network algorithm, STOCHASM, that was developed for the purpose of real-time signal detection and classification. Of prime concern was capability for dealing with transient signals having low signal-to-noise ratios (SNR). The algorithm was first developed in 1986 for real-time fault detection and diagnosis of malfunctions in ship gas turbine propulsion systems (Malkoff, 1987).
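
The abstract does not spell out STOCHASM's internals, but its promise of Bayesian probability estimates from a single template per class can be illustrated generically; the Gaussian-noise model and equal priors below are assumptions of this sketch, not the paper's method:

    # Generic illustration (not the paper's STOCHASM internals): one stored
    # template per class, Gaussian likelihoods around each template, and a
    # Bayesian posterior over classes for an incoming signal frame.

    import numpy as np

    def posterior(frame, templates, noise_std=1.0):
        """P(class | frame) assuming equal priors and Gaussian noise."""
        d2 = np.array([np.sum((frame - t) ** 2) for t in templates])
        loglik = -d2 / (2 * noise_std ** 2)
        p = np.exp(loglik - loglik.max())    # subtract max for stability
        return p / p.sum()

    templates = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    print(posterior(np.array([0.9, 0.1]), templates))  # favors class 0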


Neural Network Visualization

Neural Information Processing Systems

We have developed graphics to visualize static and dynamic information in layered neural network learning systems. Emphasis was placed on creating new visuals that make use of spatial arrangements, size information, animation and color. We applied these tools to the study of back-propagation learning of simple Boolean predicates, and have obtained new insights into the dynamics of the learning process.
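
A generic example of the kind of display described, with square size encoding weight magnitude and color encoding sign (a Hinton-style diagram); this is a sketch of the genre, not the authors' tool, and it assumes matplotlib is available:

    # A weight display in the spirit described: square size encodes weight
    # magnitude, color encodes sign. Purely a generic sketch.

    import numpy as np
    import matplotlib.pyplot as plt

    def plot_weights(W):
        fig, ax = plt.subplots()
        max_w = np.abs(W).max()
        for (i, j), w in np.ndenumerate(W):
            size = np.sqrt(abs(w) / max_w)           # area ~ |w|
            color = "black" if w > 0 else "white"
            ax.add_patch(plt.Rectangle((j - size / 2, i - size / 2),
                                       size, size, facecolor=color,
                                       edgecolor="gray"))
        ax.set_xlim(-1, W.shape[1]); ax.set_ylim(-1, W.shape[0])
        ax.set_aspect("equal"); ax.set_facecolor("lightgray")
        plt.show()

    plot_weights(np.random.default_rng(0).normal(size=(4, 6)))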


Sequential Decision Problems and Neural Networks

Neural Information Processing Systems

Decision making tasks that involve delayed consequences are very common yet difficult to address with supervised learning methods. If there is an accurate model of the underlying dynamical system, then these tasks can be formulated as sequential decision problems and solved by Dynamic Programming. This paper discusses reinforcement learning in terms of the sequential decision framework and shows how a learning algorithm similar to the one implemented by the Adaptive Critic Element used in the pole-balancer of Barto, Sutton, and Anderson (1983), and further developed by Sutton (1984), fits into this framework. Adaptive neural networks can play significant roles as modules for approximating the functions required for solving sequential decision problems.
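
The Adaptive Critic Element's learning rule is, in modern terms, a temporal-difference value update; a minimal tabular sketch follows (the tabular simplification and parameter values are mine, standing in for the function approximation the networks provide):

    # Minimal tabular sketch of the adaptive-critic idea: learn a value
    # estimate V(s) from delayed rewards via the temporal-difference error,
    # the quantity the Adaptive Critic Element computes.

    def td0_update(V, s, reward, s_next, alpha=0.1, gamma=0.95):
        td_error = reward + gamma * V[s_next] - V[s]
        V[s] += alpha * td_error
        return td_error

    V = {"balanced": 0.0, "fallen": 0.0}
    # pole stays balanced (reward 0) until it falls (reward -1):
    for _ in range(100):
        td0_update(V, "balanced", 0.0, "balanced")
        td0_update(V, "balanced", -1.0, "fallen")
    print(V["balanced"])   # negative: the critic anticipates failure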


Bayesian Inference of Regular Grammar and Markov Source Models

Neural Information Processing Systems

In this paper we develop a Bayes criterion, which includes the Rissanen complexity, for inferring regular grammar models. We develop two methods for regular grammar Bayesian inference. The first method is based on treating the regular grammar as a 1-dimensional Markov source, and the second is based on the combinatoric characteristics of the regular grammar itself. We apply the resulting Bayes criteria to a particular example in order to show the efficiency of each method.
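
A schematic of the criterion's two-part structure, trading model complexity against data fit; the coding choices below are illustrative assumptions, not the paper's exact Bayes criterion:

    # Schematic two-part code in the spirit of a Bayes/Rissanen criterion:
    # total description length = bits to encode the model (grammar) plus
    # bits to encode the data given the model.

    import math

    def description_length(n_states, n_symbols, neg_loglik_bits):
        # crude model cost: each state stores one transition per symbol
        model_bits = n_states * n_symbols * math.log2(max(n_states, 2))
        return model_bits + neg_loglik_bits

    # prefer the grammar with the smaller total length:
    print(description_length(2, 2, 1000.0))   # small model, poor fit
    print(description_length(8, 2, 850.0))    # bigger model, better fit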


Pulse-Firing Neural Chips for Hundreds of Neurons

Neural Information Processing Systems

We announce new CMOS synapse circuits using only three and four MOSFETs/synapse. Neural states are asynchronous pulse streams, upon which arithmetic is performed directly. Chips implementing over 100 fully programmable synapses are described and projections to networks of hundreds of neurons are made.

1 OVERVIEW OF PULSE FIRING NEURAL VLSI
The inspiration for the use of pulse firing in silicon neural networks is clearly the electrical/chemical pulse mechanism in "real" biological neurons. Neurons fire voltage pulses of a frequency determined by their level of activity but of a constant magnitude (approximately 100 mV) [Murray, 1989a]. As indicated in Figure 1, synapses perform arithmetic directly on these asynchronous pulses, to increment or decrement the receiving neuron's activity.
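
A toy simulation of the pulse-stream arithmetic described, where presynaptic activity is encoded as a pulse rate and each arriving pulse increments or decrements the receiving neuron's activity by the synaptic weight (the parameters are illustrative, not the chip's values):

    # Toy simulation of pulse-stream arithmetic: presynaptic activity is a
    # pulse rate; each arriving pulse bumps the postsynaptic activity by the
    # (signed) synaptic weight.

    import random

    def simulate(pre_rate, weight, steps=1000, seed=0):
        random.seed(seed)
        activity = 0.0
        for _ in range(steps):
            if random.random() < pre_rate:   # a presynaptic pulse arrives
                activity += weight           # synapse increments/decrements
        return activity / steps              # mean contribution per step

    print(simulate(0.3,  0.5))   # ~ 0.15 = rate * weight
    print(simulate(0.3, -0.5))   # ~ -0.15: inhibitory synapse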