Nakano, Ryohei
SMEM Algorithm for Mixture Models
Ueda, Naonori, Nakano, Ryohei, Ghahramani, Zoubin, Hinton, Geoffrey E.
We present a split and merge EM (SMEM) algorithm to overcome the local maximum problem in parameter estimation of finite mixture models. In the case of mixture models, non-global maxima often involve having too many components of a mixture model in one part of the space and too few in another, widely separated part of the space. To escape from such configurations we repeatedly perform simultaneous split and merge operations, using a new criterion for efficiently selecting the split and merge candidates. We apply the proposed algorithm to the training of Gaussian mixtures and mixtures of factor analyzers using synthetic and real data, and show the effectiveness of the split and merge operations in improving the likelihood of both the training data and held-out test data.

1 INTRODUCTION

Mixture density models, in particular normal mixtures, have been extensively used in the field of statistical pattern recognition [1]. Recently, more sophisticated mixture density models, such as mixtures of latent variable models (e.g., probabilistic PCA or factor analysis), have been proposed to approximate the underlying data manifold [2]-[4].
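As a concrete illustration, here is a minimal sketch (in Python, not the authors' code) of a merge criterion of the kind the paper proposes: score each pair of components by the dot product of their posterior (responsibility) vectors over the data, so that pairs that claim largely the same points become merge candidates. sklearn's GaussianMixture stands in for the EM-fitted mixture; the data and all names are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def merge_criterion(resp):
    """J_merge(i, j): dot product of the posterior (responsibility)
    vectors of components i and j over the data set. A large value
    means the two components explain much the same data points,
    making the pair a natural merge candidate."""
    return resp.T @ resp

# Illustrative usage on synthetic data (sizes and seeds are arbitrary).
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(m, 0.5, size=(200, 2)) for m in (-3.0, 0.0, 3.0)])

gmm = GaussianMixture(n_components=5, random_state=0).fit(X)
resp = gmm.predict_proba(X)              # (n_samples, 5) responsibilities

J = merge_criterion(resp)
np.fill_diagonal(J, -np.inf)             # exclude self-pairs
i, j = np.unravel_index(np.argmax(J), J.shape)
print(f"best merge candidate: components {i} and {j}")
```

A full SMEM step would pair this with a split criterion (roughly, a local Kullback-Leibler divergence between a component's density and the data it is responsible for) and partial EM re-estimation of the affected components; both are omitted here.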
Second-order Learning Algorithm with Squared Penalty Term
Saito, Kazumi, Nakano, Ryohei
This paper compares three penalty terms with respect to the efficiency of supervised learning, using first- and second-order learning algorithms. Our experiments showed that, for a reasonably adequate penalty factor, the combination of the squared penalty term and the second-order learning algorithm improves convergence performance by more than a factor of 20 over the other combinations, while at the same time yielding better generalization performance.
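To make the favored combination concrete, the sketch below minimizes a squared-error objective plus the squared penalty term lam * sum(w**2) with a quasi-Newton optimizer. This is an illustration under stated assumptions, not the paper's setup: scipy's BFGS stands in for the authors' second-order learning algorithm, linear least squares stands in for network training, and the penalty factor lam is arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

def penalized_loss(w, X, y, lam):
    # squared-error loss plus the squared penalty term lam * sum(w**2)
    r = X @ w - y
    return 0.5 * r @ r + lam * w @ w

def penalized_grad(w, X, y, lam):
    # gradient of the loss above; supplying it lets the quasi-Newton
    # method build its curvature (second-order) approximation
    return X.T @ (X @ w - y) + 2.0 * lam * w

# Illustrative synthetic regression problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=100)

w0 = np.zeros(10)
res = minimize(penalized_loss, w0, args=(X, y, 1e-2),
               jac=penalized_grad, method="BFGS")
print(res.x, res.nit)
```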
Deterministic Annealing Variant of the EM Algorithm
Ueda, Naonori, Nakano, Ryohei
We present a deterministic annealing variant of the EM algorithm for maximum likelihood parameter estimation problems. In our approach, the EM process is reformulated as the problem of minimizing the thermodynamic free energy, using the principle of maximum entropy and an analogy with statistical mechanics. Unlike simulated annealing approaches, this minimization is performed deterministically. Moreover, unlike the conventional EM algorithm, the derived algorithm can obtain better estimates that are independent of the initial parameter values.
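A minimal sketch of the tempered E-step this describes, for a one-dimensional Gaussian mixture: posteriors are computed at an inverse temperature beta that is annealed up to 1, at which point the update coincides with standard EM. The annealing schedule, initialization, and variance floor here are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.stats import norm

def daem_gmm_1d(x, K, betas=(0.2, 0.4, 0.6, 0.8, 1.0), iters=50, seed=0):
    """Deterministic-annealing EM for a 1-D Gaussian mixture (sketch).

    The E-step posteriors are proportional to
    (pi_k * N(x | mu_k, var_k)) ** beta; the inverse temperature beta
    is annealed up to 1, where the update reduces to standard EM."""
    rng = np.random.default_rng(seed)
    pi = np.full(K, 1.0 / K)
    mu = rng.choice(x, K)                    # illustrative initialization
    var = np.full(K, x.var())
    for beta in betas:
        for _ in range(iters):
            # tempered E-step (log domain for numerical stability)
            logp = beta * (np.log(pi) + norm.logpdf(x[:, None], mu, np.sqrt(var)))
            logp -= logp.max(axis=1, keepdims=True)
            r = np.exp(logp)
            r /= r.sum(axis=1, keepdims=True)
            # M-step: standard weighted Gaussian-mixture updates
            nk = r.sum(axis=0)
            pi = nk / len(x)
            mu = (r * x[:, None]).sum(axis=0) / nk
            var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
            var = np.maximum(var, 1e-6)      # floor to avoid collapse
    return pi, mu, var

# Illustrative usage on a two-component synthetic mixture.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 0.5, 500)])
print(daem_gmm_1d(x, K=2))
```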