Improved Gaussian Mixture Density Estimates Using Bayesian Penalty Terms and Network Averaging
We compare two regularization methods which can be used to improve the generalization capabilities of Gaussian mixture density estimates. The first method uses a Bayesian prior on the parameter space. We derive EM (Expectation Maximization) update rules which maximize the a posteriori probability of the parameters. In the second approach we apply ensemble averaging to density estimation. This includes Breiman's "bagging", which has recently been found to produce impressive results for classification networks.
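To make the two approaches concrete, consider first the Bayesian penalty: one common form of such a penalized EM update, assuming a Dirichlet prior Dir(\alpha_1,\dots,\alpha_K) on the mixing weights (the paper's actual priors may also cover means and covariances), replaces the usual M-step \pi_k = N_k / N, with N_k = \sum_i r_{ik} the summed responsibilities, by

\pi_k = \frac{N_k + \alpha_k - 1}{N + \sum_{j=1}^{K} (\alpha_j - 1)}.

The second approach, ensemble averaging, can be sketched as follows: fit several mixture models on bootstrap resamples of the data and average their densities. The code below is a minimal illustration of this idea using scikit-learn's GaussianMixture; it is not the authors' exact procedure, and the function name bagged_gmm_density and all parameter choices are illustrative assumptions.

```python
# Sketch: bagged (ensemble-averaged) Gaussian mixture density estimation.
# Illustrative only; not the paper's exact method or hyperparameters.
import numpy as np
from sklearn.mixture import GaussianMixture

def bagged_gmm_density(X, n_components=3, n_estimators=10, seed=0):
    """Fit GMMs on bootstrap resamples and return a function that
    averages the individual mixture densities."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    models = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)            # bootstrap resample
        gmm = GaussianMixture(n_components=n_components,
                              reg_covar=1e-6,       # mild numerical regularization
                              random_state=0).fit(X[idx])
        models.append(gmm)

    def density(x):
        # Average the ensemble members' densities p_m(x);
        # score_samples returns log-densities, so exponentiate first.
        return np.mean([np.exp(m.score_samples(x)) for m in models], axis=0)

    return density

# Example usage on synthetic 2-D data with two well-separated clusters
X = np.vstack([np.random.randn(200, 2), np.random.randn(200, 2) + 4.0])
p = bagged_gmm_density(X)
print(p(np.array([[0.0, 0.0], [4.0, 4.0]])))
```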
Neural Information Processing Systems
Dec-31-1996