Maximum Likelihood Competitive Learning
Neural Information Processing Systems
One popular class of unsupervised algorithms is competitive algorithms. In the traditional view of competition, only one competitor, the winner, adapts for any given case. I propose to view competitive adaptation as attempting to fit a blend of simple probability generators (such as Gaussians) to a set of data points. The maximum likelihood fit of a model of this type suggests a "softer" form of competition, in which all competitors adapt in proportion to the relative probability that the input came from each competitor. I investigate one application of the soft competitive model, placement of radial basis function centers for function interpolation, and show that the soft model can give better performance with little additional computational cost.

1 INTRODUCTION

Interest in unsupervised learning has increased recently due to the application of more sophisticated mathematical tools (Linsker, 1988; Plumbley and Fallside, 1988; Sanger, 1989) and the success of several elegant simulations of large-scale self-organization (Linsker, 1986; Kohonen, 1982). One popular class of unsupervised algorithms is competitive algorithms, which have appeared as components in a variety of systems (von der Malsburg, 1973; Fukushima, 1975; Grossberg, 1978). Generalizing the definition of Rumelhart and Zipser (1986), a competitive adaptive system consists of a collection of modules which are structurally identical except, possibly, for random initial parameter variation.
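The soft competitive update described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the batch (rather than online) update, and a fixed shared variance for the Gaussian generators are all assumptions made for the example.

```python
import numpy as np

def soft_competitive_step(X, centers, sigma=1.0):
    """One responsibility-weighted update of the competitors' centers.

    X       : (n, d) array of data points
    centers : (k, d) array of current competitor centers
    sigma   : shared isotropic standard deviation (assumed fixed here)
    """
    # Squared distance from every point to every center: shape (n, k)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    # Relative probability that each point came from each competitor:
    # a softmax over negative scaled distances (Gaussian responsibilities)
    logp = -d2 / (2.0 * sigma ** 2)
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    r = p / p.sum(axis=1, keepdims=True)  # (n, k), rows sum to 1
    # Every competitor adapts, weighted by its responsibility for each
    # point -- the "soft" counterpart of winner-take-all adaptation
    return (r.T @ X) / r.sum(axis=0)[:, None]
```

As sigma shrinks, the responsibilities approach a one-hot assignment and the update recovers the traditional hard-competition rule in which only the winner moves.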
Dec-31-1990