scikit-learn: Building a Multi-Class Classification Ensemble - DZone AI

For the Kaggle Spooky Author Identification competition, I wanted to combine multiple classifiers into an ensemble, and found that scikit-learn's VotingClassifier does exactly that.
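
A minimal sketch of that approach, using scikit-learn's VotingClassifier with an illustrative trio of base models (not necessarily the article's exact setup):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 'soft' voting averages predicted class probabilities across estimators;
# 'hard' voting takes a majority vote on predicted labels instead.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("nb", GaussianNB()),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", ensemble.score(X_test, y_test))
```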



ICPRAI 2018 SI: On dynamic ensemble selection and data preprocessing for multi-class imbalance learning

arXiv.org Machine Learning

Class imbalance refers to classification problems in which many more instances are available for certain classes than for others. Such imbalanced datasets require special attention because traditional classifiers generally favor the majority class, which has a large number of instances. Ensembles of classifiers have been reported to yield promising results. However, the majority of ensemble methods applied to imbalanced learning are static ones. Moreover, they deal only with binary imbalanced problems. Hence, this paper presents an empirical analysis of dynamic selection techniques and data preprocessing methods for dealing with multi-class imbalanced problems. We considered five variations of preprocessing methods and fourteen dynamic selection schemes. Our experiments, conducted on 26 multi-class imbalanced problems, show that dynamic ensembles improve the AUC and the G-mean compared to static ensembles. Moreover, data preprocessing plays an important role in such cases.
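
The abstract does not name specific methods, but a hedged sketch of this kind of pipeline can be put together from the imbalanced-learn and DESlib libraries: SMOTE as the data preprocessing step, a bagging pool, and KNORA-E as one of the dynamic selection schemes, scored with the multi-class G-mean. The dataset and parameters here are illustrative assumptions, not the paper's setup.

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from imblearn.metrics import geometric_mean_score
from deslib.des.knora_e import KNORAE

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Preprocessing: oversample minority classes in the training data only.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

# Hold out part of the resampled data as the dynamic selection (DSEL) set.
X_pool, X_dsel, y_pool, y_dsel = train_test_split(
    X_res, y_res, test_size=0.5, stratify=y_res, random_state=0)

# Train a static pool of classifiers, then let KNORA-E pick competent
# members per test instance based on their local accuracy over DSEL.
pool = BaggingClassifier(n_estimators=10, random_state=0).fit(X_pool, y_pool)
des = KNORAE(pool).fit(X_dsel, y_dsel)

y_pred = des.predict(X_test)
print("G-mean:", geometric_mean_score(y_test, y_pred, average="multiclass"))
```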


Solving for multi-class: a survey and synthesis

arXiv.org Machine Learning

We review common methods of solving for multi-class from binary and generalize them to a common framework. Since conditional probabilities are useful both for quantifying the accuracy of an estimate and for calibration purposes, these are a required part of the solution. There is some indication that the best solution for multi-class classification depends on the particular dataset. As such, we are particularly interested in data-driven solution design, whether based on a priori considerations or empirical examination of the data. Numerical results indicate that while a one-size-fits-all solution consisting of one-versus-one is appropriate for most datasets, a minority will benefit from a more customized approach. The techniques discussed in this paper allow a large variety of multi-class configurations and solution methods to be explored so as to optimize classification accuracy, the accuracy of conditional probabilities, and speed.
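
The two most common binary decompositions the survey builds on are available off the shelf in scikit-learn, which makes the dataset-dependence claim easy to probe: fit both reductions on the same data and compare. The dataset and base learner below are illustrative choices, not the paper's.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base = LogisticRegression(max_iter=5000)

# One-vs-one trains K(K-1)/2 pairwise classifiers; one-vs-rest trains K
# classifiers, each separating one class from all the others.
for name, clf in [("one-vs-one", OneVsOneClassifier(base)),
                  ("one-vs-rest", OneVsRestClassifier(base))]:
    clf.fit(X_train, y_train)
    print(name, "accuracy:", clf.score(X_test, y_test))
```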


Multi-Class Learning: From Theory to Algorithm

Neural Information Processing Systems

In this paper, we study the generalization performance of multi-class classification and obtain a sharper data-dependent generalization error bound with a fast convergence rate, substantially improving the state-of-the-art bounds in the existing data-dependent generalization analysis. The theoretical analysis motivates us to devise two effective multi-class kernel learning algorithms with statistical guarantees. Experimental results show that our proposed methods can significantly outperform existing multi-class classification methods.
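
The paper's two kernel learning algorithms are not specified in this abstract; as a stand-in, here is a generic multi-class kernel classifier with the kernel width chosen by cross-validation, the kind of baseline such methods are typically compared against.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# SVC handles multi-class via one-vs-one internally; tune the RBF width.
search = GridSearchCV(SVC(kernel="rbf"), {"gamma": [1e-4, 1e-3, 1e-2]}, cv=3)
search.fit(X_train, y_train)
print("best gamma:", search.best_params_,
      "test accuracy:", search.score(X_test, y_test))
```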