Convergence of a Neural Network Classifier
Baras, John S., LaVigna, Anthony
In this paper, we prove that the vectors in the LVQ learning algorithm converge. We do this by showing that the learning algorithm performs stochastic approximation. Convergence is then obtained by identifying the appropriate conditions on the learning rate and on the underlying statistics of the classification problem. We also present a modification to the learning algorithm which we argue results in convergence of the LVQ error to the Bayesian optimal error as the appropriate parameters become large.
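For a concrete reference point, the standard LVQ1 update that such a convergence analysis concerns can be sketched as follows; the array shapes, decay schedule, and constants below are illustrative assumptions rather than details taken from the paper. The decaying learning rate is of the Robbins-Monro type required by stochastic approximation (the step sizes sum to infinity while their squares sum to a finite value).

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, n_epochs=10, alpha0=0.1):
    """Minimal LVQ1 sketch: the winning prototype moves toward a sample of
    the same class and away from a sample of a different class, with a
    decaying (Robbins-Monro style) learning rate."""
    t = 0
    for _ in range(n_epochs):
        for xi, yi in zip(X, y):
            t += 1
            alpha = alpha0 / (1.0 + 0.01 * t)  # decaying step size (assumed schedule)
            k = np.argmin(np.linalg.norm(prototypes - xi, axis=1))  # nearest prototype
            sign = 1.0 if proto_labels[k] == yi else -1.0
            prototypes[k] += sign * alpha * (xi - prototypes[k])
    return prototypes
```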
Discrete Affine Wavelet Transforms For Analysis And Synthesis Of Feedforward Neural Networks
Pati, Y. C., Krishnaprasad, P. S.
In this paper we show that discrete affine wavelet transforms can provide a tool for the analysis and synthesis of standard feedforward neural networks. It is shown that wavelet frames for L2(IR) can be constructed based upon sigmoids. The spatio-spectral localization property of wavelets can be exploited in defining the topology and determining the weights of a feedforward network. Training a network constructed using the synthesis procedure described here involves minimization of a convex cost functional and therefore avoids pitfalls inherent in standard backpropagation algorithms. Extension of these methods to L2(IR^N) is also discussed.
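As a rough illustration of the idea (not the paper's actual frame construction), one can build a zero-mean, localized "wavelet-like" function from shifted sigmoids, fix a hidden layer of its dilates and translates, and fit only the linear output weights, which makes the training cost an ordinary (convex) least-squares problem. The specific bump, scales, shifts, and target below are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def psi(x):
    # Illustrative zero-mean bump built from sigmoids (an assumption for this
    # sketch, not necessarily the construction used in the paper).
    return sigmoid(x + 2) - 2 * sigmoid(x) + sigmoid(x - 2)

def synthesize_features(x, scales, shifts):
    # Dilated and translated copies psi(2^j * x - k) form a fixed hidden layer;
    # the network topology is determined by which (j, k) pairs are included.
    return np.stack([psi((2.0 ** j) * x - k) for j in scales for k in shifts], axis=1)

# Only the linear output weights are trained, so fitting them is a convex
# least-squares problem with no spurious local minima.
x = np.linspace(-4, 4, 200)
target = np.sin(2 * x) * np.exp(-x ** 2)          # example target function
Phi = synthesize_features(x, scales=range(0, 3), shifts=range(-8, 9))
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
approx = Phi @ w
```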
Adaptive Range Coding
Rosen, Bruce E., Goodwin, James M., Vidal, Jacques J.
Determination of nearly optimal, or at least adequate, regions is left as an additional task that would require that the system dynamics be analyzed, which is not always possible. To address this problem, we move region boundaries adaptively, progressively altering the initial partitioning to a more appropriate representation with no need for a priori knowledge. Unlike previous work (Michie, 1968), (Barto, 1983), (Anderson, 1982), which used fixed coders, this approach produces adaptive coders that contract and expand regions/ranges. During adaptation, frequently active regions/ranges contract, reducing the number of situations in which they will be activated, and increasing the chances that neighboring regions will receive input instead. This class of self-organization is discussed in Kohonen (Kohonen, 1984), (Ritter, 1986, 1988).
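A minimal sketch of the contraction idea, under assumed details (one input dimension, interior boundaries only, and a fixed shrink factor): whenever a range receives an input, its edges are pulled toward its centre, so the frequently active range shrinks and its neighbours implicitly grow.

```python
import numpy as np

def adapt_boundaries(boundaries, x, shrink=0.02):
    """Contract the range that input x falls into; neighbouring ranges
    thereby expand and become more likely to receive future inputs."""
    i = np.searchsorted(boundaries, x)            # index of the active range
    if 0 < i < len(boundaries):                   # only interior ranges adapt here
        lo, hi = boundaries[i - 1], boundaries[i]
        centre = 0.5 * (lo + hi)
        boundaries[i - 1] = lo + shrink * (centre - lo)   # pull lower edge inward
        boundaries[i] = hi - shrink * (hi - centre)       # pull upper edge inward
    return boundaries

# Interior boundaries partition [0, 1] into five ranges of equal initial size.
b = np.linspace(0.2, 0.8, 4)
for x in np.random.beta(2, 5, size=1000):         # inputs concentrated near 0.25
    b = adapt_boundaries(b, x)
```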
Phase-coupling in Two-Dimensional Networks of Interacting Oscillators
Niebur, Ernst, Kammen, Daniel M., Koch, Christof, Ruderman, Daniel L., Schuster, Heinz G.
Coherent oscillatory activity in large networks of biological or artificial neural units may be a useful mechanism for coding information pertaining to a single perceptual object or for detailing regularities within a data set. We consider the dynamics of a large array of simple coupled oscillators under a variety of connection schemes. Of particular interest is the rapid and robust phase-locking that results from a "sparse" scheme where each oscillator is strongly coupled to a tiny, randomly selected, subset of its neighbors.
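A Kuramoto-style simulation is one simple way to see the sparse-coupling effect; the specific phase model, frequency spread, and coupling strength below are assumptions and may differ from the oscillator model used in the paper. Each unit is strongly coupled to a small random subset of the others, and the order parameter |r| approaches 1 as the array phase-locks.

```python
import numpy as np

def simulate(n=400, k=5, coupling=2.0, steps=2000, dt=0.01, seed=0):
    """Sketch of sparse random coupling among phase oscillators, tracking the
    phase-coherence order parameter over time."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(1.0, 0.1, n)                       # intrinsic frequencies
    theta = rng.uniform(0, 2 * np.pi, n)                  # initial phases
    neighbours = np.array([rng.choice(n, size=k, replace=False) for _ in range(n)])
    coherence = []
    for _ in range(steps):
        # Each oscillator is pulled toward the phases of its sparse neighbour set.
        pull = np.sin(theta[neighbours] - theta[:, None]).mean(axis=1)
        theta += dt * (omega + coupling * pull)
        coherence.append(np.abs(np.exp(1j * theta).mean()))  # |r| -> 1 when locked
    return coherence
```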
Design and Implementation of a High Speed CMAC Neural Network Using Programmable CMOS Logic Cell Arrays
Miller, W. Thomas, III, Box, Brian A., Whitney, Erich C., Glynn, James M.
A high speed implementation of the CMAC neural network was designed using dedicated CMOS logic. This technology was then used to implement two general purpose CMAC associative memory boards for the VME bus. Each board implements up to 8 independent CMAC networks with a total of one million adjustable weights. Each CMAC network can be configured to have from 1 to 512 integer inputs and from 1 to 8 integer outputs. Response times for typical CMAC networks are well below 1 millisecond, making the networks sufficiently fast for most robot control problems, and many pattern recognition and signal processing problems.
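The paper describes a dedicated CMOS implementation; purely as background, a software sketch of the CMAC scheme itself might look like the following, where the number of tilings, the hashed table size, and the update rule are assumptions. Each input activates one cell in each of several overlapping coarse tilings, the output is the sum of the activated weights, and training spreads the error equally over the active cells.

```python
import numpy as np

class CMAC:
    """Minimal software sketch of a CMAC associative memory."""
    def __init__(self, n_tilings=8, table_size=4096, resolution=0.1, lr=0.1, seed=0):
        self.n_tilings, self.size, self.res, self.lr = n_tilings, table_size, resolution, lr
        self.w = np.zeros((n_tilings, table_size))
        # Each tiling is shifted by a small random offset so the tilings overlap.
        self.offsets = np.random.default_rng(seed).uniform(0, resolution, (n_tilings, 1))

    def _cells(self, x):
        # Coarsely quantize the offset input and hash the integer coordinates
        # of each tiling into its weight table.
        idx = np.floor((x + self.offsets) / self.res).astype(int)
        return np.array([hash(tuple(row)) % self.size for row in idx])

    def predict(self, x):
        cells = self._cells(np.asarray(x, dtype=float))
        return self.w[np.arange(self.n_tilings), cells].sum()

    def train(self, x, target):
        cells = self._cells(np.asarray(x, dtype=float))
        err = target - self.w[np.arange(self.n_tilings), cells].sum()
        self.w[np.arange(self.n_tilings), cells] += self.lr * err / self.n_tilings
```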
A Comparative Study of the Practical Characteristics of Neural Network and Conventional Pattern Classifiers
Ng, Kenney, Lippmann, Richard P.
Seven different neural network and conventional pattern classifiers were compared using artificial and speech recognition tasks. High order polynomial GMDH classifiers typically provided intermediate error rates and often required long training times and large amounts of memory. In addition, the decision regions formed did not generalize well to regions of the input space with little training data. Radial basis function classifiers generalized well in high dimensional spaces, and provided low error rates with training times that were much less than those of back-propagation classifiers (Lee and Lippmann, 1989). Gaussian mixture classifiers provided good performance when the numbers and types of mixtures were selected carefully to model class densities well. Linear tree classifiers were the most computationally efficient but performed poorly with high dimensionality inputs and when the number of training patterns was small. KD-tree classifiers reduced classification time by a factor of four over conventional KNN classifiers for low-dimensional (2-input) problems. They provided little or no reduction in classification time for high-dimensional (22-input) problems. Improved condensed KNN classifiers reduced memory requirements over conventional KNN classifiers by a factor of two to fifteen for all problems, without increasing the error rate significantly.
Relaxation Networks for Large Supervised Learning Problems
Alspector, Joshua, Allen, Robert B., Jayakumar, Anthony, Zeppenfeld, Torsten, Meir, Ronny
Feedback connections are required so that the teacher signal on the output neurons can modify weights during supervised learning. Relaxation methods are needed for learning static patterns with full-time feedback connections. Feedback network learning techniques have not achieved wide popularity because of the still greater computational efficiency of back-propagation. We show by simulation that relaxation networks of the kind we are implementing in VLSI are capable of learning large problems, as back-propagation networks can. A microchip incorporates deterministic mean-field theory learning as well as stochastic Boltzmann learning. A multiple-chip electronic system implementing these networks will make high-speed parallel learning in them feasible in the future.
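For orientation, the standard contrastive rule behind mean-field (and Boltzmann) learning can be sketched as follows; this is the textbook form of the rule, stated as an illustrative sketch rather than the chip's exact procedure, and the relaxation length, temperature, and learning rate are assumptions. The network is relaxed twice, once with the target outputs clamped by the teacher and once free, and the difference of the resulting pairwise correlations drives the symmetric weight update.

```python
import numpy as np

def relax(W, m, clamp_idx=None, clamp_vals=None, T=1.0, iters=50):
    """Deterministic mean-field relaxation: units settle toward
    m_i = tanh(sum_j W_ij m_j / T), with some units optionally clamped."""
    for _ in range(iters):
        m = np.tanh(W @ m / T)
        if clamp_idx is not None:
            m[clamp_idx] = clamp_vals
    return m

def mean_field_learning_step(W, visible_idx, visible_vals, output_idx, output_vals, lr=0.05):
    """One contrastive learning step: clamped-phase correlations minus
    free-phase correlations update the symmetric weights."""
    n = W.shape[0]
    m0 = np.zeros(n)
    # Clamped (teacher) phase: inputs and target outputs are both held fixed.
    idx_c = np.concatenate([visible_idx, output_idx])
    val_c = np.concatenate([visible_vals, output_vals])
    m_clamped = relax(W, m0.copy(), idx_c, val_c)
    # Free phase: only the inputs are held fixed; the outputs settle on their own.
    m_free = relax(W, m0.copy(), visible_idx, visible_vals)
    W += lr * (np.outer(m_clamped, m_clamped) - np.outer(m_free, m_free))
    np.fill_diagonal(W, 0.0)
    return W
```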
Speech Recognition Using Demi-Syllable Neural Prediction Model
Iso, Ken-ichi, Watanabe, Takao
The Neural Prediction Model is a speech recognition model based on pattern prediction by multilayer perceptrons. Its effectiveness was confirmed by speaker-independent digit recognition experiments. This paper presents an improvement of the model and its application to large vocabulary speech recognition based on subword units. The improvement is the introduction of "backward prediction," which complements the "forward prediction" of the original model and further improves prediction accuracy. In applying the model to speaker-dependent large vocabulary speech recognition, the demi-syllable unit is used as the subword recognition unit.
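A rough sketch of the forward/backward prediction idea, under assumed details (context length, the predictor networks, and the way the two predictions are combined are all assumptions): each frame is predicted from the preceding frames (forward prediction) and from the following frames (backward prediction), and the accumulated squared prediction error serves as the recognition score for a unit.

```python
import numpy as np

def prediction_error(frames, forward_mlp, backward_mlp, ctx=2):
    """Accumulate squared prediction errors for a sequence of feature frames.
    forward_mlp and backward_mlp are assumed to be trained predictor networks
    mapping a flattened context window to a predicted frame."""
    total = 0.0
    for t in range(ctx, len(frames) - ctx):
        past = frames[t - ctx:t].ravel()              # context for forward prediction
        future = frames[t + 1:t + 1 + ctx].ravel()    # context for backward prediction
        pred = 0.5 * (forward_mlp(past) + backward_mlp(future))
        total += np.sum((frames[t] - pred) ** 2)
    return total
```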
An Analog VLSI Splining Network
Schwartz, Daniel B., Samalam, Vijay K.
We have produced a VLSI circuit capable of learning to approximate arbitrary smooth functions of a single variable using a technique closely related to splines. The circuit effectively has 512 knots spaced on a uniform grid and has full support for learning. The circuit can also be used to approximate multi-variable functions as a sum of splines. An interesting, and as yet nearly untapped, set of applications for VLSI implementations of neural network learning systems can be found in adaptive control and nonlinear signal processing. In most such applications, the learning task consists of approximating a real function of a small number of continuous variables from discrete data points.
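A software sketch of the splining idea follows; the circuit itself is analog, and the knot count, the piecewise-linear basis, and the LMS-style rule here are assumptions chosen for illustration. Knot values on a uniform grid are adjusted so that the interpolated curve tracks the target function; a multi-variable function can then be approximated by summing one such spline per input variable.

```python
import numpy as np

class UniformSpline:
    """Learnable piecewise-linear spline with knots on a uniform grid."""
    def __init__(self, n_knots=512, lo=0.0, hi=1.0, lr=0.2):
        self.knots = np.zeros(n_knots)
        self.lo, self.lr = lo, lr
        self.step = (hi - lo) / (n_knots - 1)
        self.n = n_knots

    def _locate(self, x):
        # Fractional position of x between its two bracketing knots.
        u = np.clip((x - self.lo) / self.step, 0, self.n - 1 - 1e-9)
        i = int(u)
        return i, u - i

    def __call__(self, x):
        i, frac = self._locate(x)
        return (1 - frac) * self.knots[i] + frac * self.knots[i + 1]

    def update(self, x, target):
        # LMS-style update: the error is shared between the two active knots.
        i, frac = self._locate(x)
        err = target - self(x)
        self.knots[i] += self.lr * (1 - frac) * err
        self.knots[i + 1] += self.lr * frac * err
```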