
ε-Entropy and the Complexity of Feedforward Neural Networks

Neural Information Processing Systems

We are concerned with the problem of the number of nodes needed in a feedforward neural network in order to represent a function to within a specified accuracy.



Signal Processing by Multiplexing and Demultiplexing in Neurons

Neural Information Processing Systems

The signal content of the codes encoded by a presynaptic neuron is decoded by other neurons postsynaptically. Neurons are often thought to encode a single type of code, but there is evidence suggesting that neurons may encode more than one type of signal. One of the mechanisms for embedding multiple types of signals processed by a neuron is multiplexing. When signals are multiplexed, they must also be demultiplexed to extract the useful information transmitted by the neurons. Theoretical and experimental evidence for such a multiplexing and demultiplexing scheme for signal processing by neurons is given below.
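A simple engineering analogue of this scheme is frequency-division multiplexing: two signals share one channel at different frequencies, and a filter separates them again. The toy sketch below is illustrative only (the signal frequencies and the moving-average filter are arbitrary choices, not a model from the paper):

```python
import numpy as np

# Two "signals" carried on one channel by frequency multiplexing:
t = np.linspace(0, 1, 1000, endpoint=False)   # 1 s at 1 kHz sampling
slow = np.sin(2 * np.pi * 2 * t)              # low-frequency component
fast = np.sin(2 * np.pi * 50 * t)             # high-frequency component
multiplexed = slow + fast

# Demultiplex the slow component with a moving-average low-pass filter.
# A 20-sample window spans exactly one period of the fast component,
# so the fast sinusoid averages out while the slow one passes through.
window = 20
kernel = np.ones(window) / window
recovered_slow = np.convolve(multiplexed, kernel, mode="same")
```

Away from the edges of the record, `recovered_slow` closely tracks `slow`; the high-frequency carrier has been stripped off.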


Applications of Neural Networks in Video Signal Processing

Neural Information Processing Systems

Although color TV is an established technology, there are a number of longstanding problems for which neural networks may be suited. Impulse noise is such a problem, and a modular neural network approach is presented in this paper. The training and analysis were done on conventional computers, while real-time simulations were performed on a massively parallel computer called the Princeton Engine. The network approach was compared to a conventional alternative, a median filter. Real-time simulations and quantitative analysis demonstrated the technical superiority of the neural system. Ongoing work is investigating the complexity and cost of implementing this system in hardware.
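The conventional baseline mentioned above, a median filter, is easy to sketch in one dimension: each output sample is the median of a small sliding window, which suppresses isolated impulse spikes that a linear filter would only smear. The function name and window width below are illustrative, not from the paper:

```python
import numpy as np

def median_filter_1d(signal, width=3):
    """Sliding-window median filter: a standard baseline for
    removing impulse (salt-and-pepper) noise from a signal."""
    half = width // 2
    padded = np.pad(signal, half, mode="edge")  # replicate edge samples
    return np.array([np.median(padded[i:i + width])
                     for i in range(len(signal))])

# A clean ramp corrupted by a single impulse:
clean = np.arange(10, dtype=float)
noisy = clean.copy()
noisy[4] = 100.0                # impulse spike
restored = median_filter_1d(noisy)
```

The spike at index 4 is replaced by the median of its neighborhood, while the smooth ramp elsewhere is left nearly untouched.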


Lg Depth Estimation and Ripple Fire Characterization Using Artificial Neural Networks

Neural Information Processing Systems

This study has demonstrated how artificial neural networks (ANNs) can be used to characterize seismic sources using high-frequency regional seismic data. We have taken the novel approach of using ANNs as a research tool for obtaining seismic source information, specifically depth of focus for earthquakes and ripple-fire characteristics for economic blasts, rather than as just a feature classifier between earthquake and explosion populations. Overall, we have found that ANNs have potential applications to seismic event characterization and identification, beyond just as a feature classifier. In future studies, these techniques should be applied to actual data of regional seismic events recorded at the new regional seismic arrays. The results of this study indicate that an ANN should be evaluated as part of an operational seismic event identification system.
1 INTRODUCTION
ANNs have usually been used as pattern matching algorithms, and recent studies have applied ANNs to standard classification between classes of earthquakes and explosions using waveform features (Dowla et al., 1989; Dysart and Pulli, 1990).


Learning by Combining Memorization and Gradient Descent

Neural Information Processing Systems

We have created a radial basis function network that allocates a new computational unit whenever an unusual pattern is presented to the network. The network learns by allocating new units and adjusting the parameters of existing units. If the network performs poorly on a presented pattern, then a new unit is allocated which memorizes the response to the presented pattern. If the network performs well on a presented pattern, then the network parameters are updated using standard LMS gradient descent. For predicting the Mackey-Glass chaotic time series, our network learns much faster than do those using back-propagation and uses a comparable number of synapses.
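The allocate-or-adapt rule described in the abstract can be sketched compactly. This is a minimal illustration, not the paper's implementation: the error threshold, unit width, and learning rate are arbitrary choices, and real resource-allocating networks also adapt centers and shrink the allocation criteria over time.

```python
import numpy as np

class ResourceAllocatingRBF:
    """Minimal sketch: allocate a new RBF unit on large error,
    otherwise take an LMS step on the existing output weights."""

    def __init__(self, error_threshold=0.05, width=1.0, lr=0.1):
        self.centers = []   # RBF unit centers
        self.heights = []   # RBF unit output weights
        self.error_threshold = error_threshold
        self.width = width
        self.lr = lr

    def _activations(self, x):
        return [np.exp(-np.sum((x - c) ** 2) / self.width ** 2)
                for c in self.centers]

    def predict(self, x):
        return sum(h * a for h, a in zip(self.heights, self._activations(x)))

    def train_step(self, x, y):
        x = np.asarray(x, dtype=float)
        err = y - self.predict(x)
        if abs(err) > self.error_threshold:
            # Unusual pattern: allocate a unit that memorizes the response.
            self.centers.append(x)
            self.heights.append(err)
        else:
            # Familiar pattern: standard LMS step on output weights.
            for i, a in enumerate(self._activations(x)):
                self.heights[i] += self.lr * err * a

net = ResourceAllocatingRBF()
net.train_step([0.0], 1.0)   # first pattern is always "unusual": unit allocated
net.train_step([5.0], -1.0)  # far from the first center: second unit allocated
```

After these two steps the network has memorized both responses exactly, since each new unit's height is set to the residual error at its center.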


Spoken Letter Recognition

Neural Information Processing Systems

Through the use of neural network classifiers and careful feature selection, we have achieved high-accuracy speaker-independent spoken letter recognition. For isolated letters, a broad-category segmentation is performed. Location of segment boundaries allows us to measure features at specific locations in the signal, such as vowel onset, where important information resides. Letter classification is performed with a feed-forward neural network. Recognition accuracy on a test set of 30 speakers was 96%. Neural network classifiers are also used for pitch tracking and broad-category segmentation of letter strings.


Designing Linear Threshold Based Neural Network Pattern Classifiers

Neural Information Processing Systems

Terrence L. Fine, School of Electrical Engineering, Cornell University, Ithaca, NY 14853. Abstract: The three problems that concern us are identifying a natural domain of pattern classification applications of feedforward neural networks, selecting an appropriate feedforward network architecture, and assessing the tradeoff between network complexity, training set size, and statistical reliability as measured by the probability of incorrect classification. We close with some suggestions for improving the bounds that come from Vapnik-Chervonenkis theory, which can narrow, but not close, the chasm between theory and practice. Neural networks are appropriate as pattern classifiers when the pattern sources are ones of which we have little understanding, beyond perhaps a nonparametric statistical model, but we have been provided with classified samples of features drawn from each of the pattern categories. Neural networks should be able to provide rapid and reliable computation of complex decision functions. The issue in doubt is their statistical response to new inputs.


Language Induction by Phase Transition in Dynamical Recognizers

Neural Information Processing Systems

A higher order recurrent neural network architecture learns to recognize and generate languages after being "trained" on categorized exemplars. Studying these networks from the perspective of dynamical systems yields two interesting discoveries: First, a longitudinal examination of the learning process illustrates a new form of mechanical inference: Induction by phase transition. A small weight adjustment causes a "bifurcation" in the limit behavior of the network.


A Method for the Efficient Design of Boltzmann Machines for Classification Problems

Neural Information Processing Systems

A Boltzmann machine ([AHS], [HS], [AK]) is a neural network model in which the units update their states according to a stochastic decision rule. It consists of a set U of units, a set C of unordered pairs of elements of U, and an assignment of connection strengths S: C → R. A configuration of a Boltzmann machine is a map k: U → {0, 1}.
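The stochastic decision rule referred to above is conventionally the logistic acceptance rule: a unit turns on with probability 1 / (1 + exp(-ΔE/T)), where ΔE is the change in consensus from turning the unit on and T is a temperature. A minimal sketch of one such update, with connections stored as a dictionary keyed by unordered unit pairs (this representation and the temperature parameter are illustrative, not from the paper):

```python
import math
import random

def energy_gap(unit, state, strengths):
    """Consensus change from turning `unit` on: the sum of connection
    strengths to neighbors that are currently active (no bias units
    in this minimal sketch)."""
    gap = 0.0
    for (u, v), s in strengths.items():
        if unit == u and state.get(v, 0) == 1:
            gap += s
        elif unit == v and state.get(u, 0) == 1:
            gap += s
    return gap

def update_unit(unit, state, strengths, T=1.0, rng=random.random):
    """Stochastic decision rule: set the unit to 1 with probability
    1 / (1 + exp(-gap / T))."""
    gap = energy_gap(unit, state, strengths)
    p_on = 1.0 / (1.0 + math.exp(-gap / T))
    state[unit] = 1 if rng() < p_on else 0
    return state

# Two units joined by a strongly positive connection: with "b" on,
# "a" turns on with high probability at low temperature.
strengths = {("a", "b"): 4.0}
state = update_unit("a", {"a": 0, "b": 1}, strengths, T=0.5)
```

Passing a deterministic `rng` (e.g. `lambda: 0.5`) makes the rule reproducible for testing; in an actual Boltzmann machine the units are updated repeatedly at a decreasing temperature schedule.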