On the Product Rule for Classification Problems
We discuss theoretical aspects of the product rule for combining classifiers in supervised machine learning. We show that (1) the product rule arises from the MAP classifier under the assumptions of equal priors and conditional independence given the class; (2) under some conditions, the product rule is equivalent to minimizing the sum of the squared distances to the respective centers of the classes associated with the different features, each distance weighted by the spread of the class; (3) under certain hypotheses, the product rule is equivalent to concatenating the feature vectors.

As the Machine Learning field advanced and many different techniques were discovered, the subject of combining multiple learners [2] eventually drew attention, in particular the problem of combining classifiers. Many different methods appeared, and they were soon compared in terms of their efficiency in solving problems. The product rule has appeared in several of these works (e.g., [1, 7, 3, 6, 5, 4, 8]), in contexts ranging from the accuracy of the different combination rules to analytical properties of the different methods.
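As a minimal sketch of the combination scheme discussed above, the following hypothetical example fuses the posterior probability vectors of two classifiers by the product rule, assuming equal class priors and conditional independence of the feature sets given the class (the numeric posteriors are illustrative, not from the paper):

```python
import numpy as np

def product_rule(posteriors):
    """Combine per-classifier posterior vectors P(c | x_i) by the
    product rule: multiply across classifiers, then renormalize.
    Valid as a MAP combination under equal priors and conditional
    independence of the features given the class."""
    combined = np.prod(np.asarray(posteriors), axis=0)
    return combined / combined.sum()

# Two hypothetical classifiers, each producing posteriors over three classes
p1 = np.array([0.6, 0.3, 0.1])
p2 = np.array([0.5, 0.2, 0.3])
fused = product_rule([p1, p2])
predicted_class = int(fused.argmax())
```

Here the fused posterior is proportional to the elementwise product (0.30, 0.06, 0.03), so class 0 is selected; a single classifier assigning near-zero probability to a class effectively vetoes it, which is a well-known sensitivity of the product rule.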
Jan-17-2013