

Maximum Likelihood Competitive Learning

Neural Information Processing Systems

One popular class of unsupervised algorithms is competitive algorithms. In the traditional view of competition, only one competitor, the winner, adapts for any given case. I propose to view competitive adaptation as attempting to fit a blend of simple probability generators (such as Gaussians) to a set of data points. The maximum likelihood fit of a model of this type suggests a "softer" form of competition, in which all competitors adapt in proportion to the relative probability that the input came from each competitor. I investigate one application of the soft competitive model, placement of radial basis function centers for function interpolation, and show that the soft model can give better performance with little additional computational cost.

1 INTRODUCTION

Interest in unsupervised learning has increased recently due to the application of more sophisticated mathematical tools (Linsker, 1988; Plumbley and Fallside, 1988; Sanger, 1989) and the success of several elegant simulations of large-scale self-organization (Linsker, 1986; Kohonen, 1982). One popular class of unsupervised algorithms is competitive algorithms, which have appeared as components in a variety of systems (Von der Malsburg, 1973; Fukushima, 1975; Grossberg, 1978). Generalizing the definition of Rumelhart and Zipser (1986), a competitive adaptive system consists of a collection of modules which are structurally identical except, possibly, for random initial parameter variation.
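The core idea of the abstract — every competitor adapts in proportion to the relative probability that the input came from it — can be sketched as a single update step. This is an illustrative sketch, not the paper's exact algorithm: it assumes equal-prior isotropic Gaussians with a shared, fixed width `sigma`, and the function and parameter names are my own.

```python
import numpy as np

def soft_competitive_step(x, centers, sigma=1.0, lr=0.1):
    """One soft-competitive update: every center moves toward the
    input x, weighted by the relative probability (responsibility)
    that x was generated by that center's isotropic Gaussian.
    In the sigma -> 0 limit this recovers winner-take-all updates."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    logp = -d2 / (2.0 * sigma ** 2)
    p = np.exp(logp - logp.max())   # subtract max for numerical stability
    p /= p.sum()                    # responsibilities sum to 1
    return centers + lr * p[:, None] * (x - centers)

# Toy run: adapt 3 centers to 200 random 2-D inputs
rng = np.random.default_rng(0)
centers = rng.normal(size=(3, 2))
for x in rng.normal(size=(200, 2)):
    centers = soft_competitive_step(x, centers)
```

Unlike hard competition, no center is ever starved of updates: even a poorly placed center receives a small pull toward each input, which is the practical advantage the abstract claims for center placement.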


Bayesian Inference of Regular Grammar and Markov Source Models

Neural Information Processing Systems

In this paper we develop a Bayes criterion, which includes the Rissanen complexity, for inferring regular grammar models. We develop two methods for regular grammar Bayesian inference. The first method is based on treating the regular grammar as a 1-dimensional Markov source, and the second is based on the combinatoric characteristics of the regular grammar itself. We apply the resulting Bayes criteria to a particular example in order to show the efficiency of each method.
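The paper's first method treats the grammar as a Markov source scored by a Bayes criterion with a Rissanen complexity term. As a rough illustration of that style of criterion — not the paper's actual formula, whose combinatoric terms differ — here is a generic two-part description length for a first-order Markov source: the negative log-likelihood under maximum-likelihood transition probabilities plus Rissanen's (k/2) log n model cost.

```python
import math
from collections import Counter

def markov_mdl_score(seq, alphabet):
    """Two-part code length (in nats) for a first-order Markov source:
    data cost = negative log-likelihood under ML transition estimates,
    model cost = (k/2) * log(n), with k free transition parameters."""
    pair_counts = Counter(zip(seq, seq[1:]))
    state_counts = Counter(seq[:-1])
    nll = 0.0
    for (a, b), c in pair_counts.items():
        p = c / state_counts[a]          # ML transition probability
        nll -= c * math.log(p)
    k = len(alphabet) * (len(alphabet) - 1)  # free params per row
    return nll + 0.5 * k * math.log(len(seq))

# A perfectly alternating string is fit exactly (zero data cost),
# so only the complexity penalty remains.
score = markov_mdl_score("abababababab", alphabet="ab")
```

Model selection under such a criterion simply picks the candidate source with the smallest total score, trading fit against complexity.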


Performance of Synthetic Neural Network Classification of Noisy Radar Signals

Neural Information Processing Systems

This study evaluates the performance of the multilayer perceptron and the frequency-sensitive competitive learning network in identifying five commercial aircraft from radar backscatter measurements. The performance of the neural network classifiers is compared with that of the nearest-neighbor and maximum-likelihood classifiers. Our results indicate that for this problem, the neural network classifiers are relatively insensitive to changes in the network topology and to the noise level in the training data. Although the traditional algorithms outperform these simple neural classifiers on this problem, we feel that neural networks show the potential for improved performance.
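One of the traditional baselines mentioned, the nearest-neighbor rule, is simple enough to sketch in a few lines. This is a generic 1-NN classifier on toy 2-D data, not the study's radar pipeline; the feature layout and names here are assumptions for illustration.

```python
import numpy as np

def nearest_neighbor_classify(train_x, train_y, query):
    """1-NN rule: assign the query the class of its closest
    training sample under Euclidean distance."""
    d = np.linalg.norm(train_x - query, axis=1)
    return train_y[np.argmin(d)]

# Two well-separated toy clusters standing in for backscatter features
train_x = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 4.9]])
train_y = np.array([0, 0, 1, 1])
label = nearest_neighbor_classify(train_x, train_y, np.array([0.2, 0.0]))
```

Because 1-NN stores all training samples and makes no smoothness assumptions, it is a natural yardstick against which to judge the neural classifiers' sensitivity to training noise.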

