Combining Estimators Using Non-Constant Weighting Functions

Tresp, Volker, Taniguchi, Michiaki

Neural Information Processing Systems 

Volker Tresp* and Michiaki Taniguchi
Siemens AG, Central Research
Otto-Hahn-Ring 6
81730 München, Germany

Abstract

This paper discusses the linearly weighted combination of estimators in which the weighting functions are dependent on the input. We show that the weighting functions can be derived either by evaluating the input-dependent variance of each estimator or by estimating how likely it is that a given estimator has seen data in the region of the input space close to the input pattern. The latter solution is closely related to the mixture of experts approach, and we show how learning rules for the mixture of experts can be derived from the theory about learning with missing features. The presented approaches are modular since the weighting functions can easily be modified (no retraining) if more estimators are added. Furthermore, it is easy to incorporate estimators which were not derived from data, such as expert systems or algorithms.

1 Introduction

Instead of modeling the global dependency between input x ∈ ℝ^D and output y ∈ ℝ using a single estimator, it is often very useful to decompose a complex mapping

*At the time of the research for this paper, a visiting researcher at the Center for Biological and Computational Learning, MIT.
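The variance-based weighting described in the abstract can be sketched as follows: each estimator f_i contributes with a weight proportional to the inverse of its input-dependent variance. This is a minimal illustrative sketch, not the paper's implementation; the function and variable names are assumptions, and the variances are taken as given rather than estimated from data.

```python
import numpy as np

def combine(predictions, variances):
    """Variance-weighted linear combination of estimator outputs at one input x.

    predictions : array of shape (n_estimators,) -- the outputs f_i(x)
    variances   : array of shape (n_estimators,) -- input-dependent variances
                  sigma_i^2(x), assumed known and strictly positive
    """
    # Weight each estimator by the inverse of its local variance ...
    w = 1.0 / np.asarray(variances, dtype=float)
    # ... and normalize so the weights sum to one.
    w /= w.sum()
    return float(np.dot(w, np.asarray(predictions, dtype=float)))

# Example: two estimators; the one with lower local variance dominates.
print(combine([1.0, 3.0], [0.1, 0.9]))  # -> 1.2 (weights 0.9 and 0.1)
```

Note how this reflects the modularity claimed in the abstract: adding a further estimator only means appending its prediction and variance to the arrays, with no retraining of the existing estimators.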
