Ensemble Learning: Stacking, Blending & Voting

#artificialintelligence

We have all heard the phrase "unity is strength", whose meaning carries over to many areas of life. Sometimes the correct answer to a problem is supported by several sources rather than just one. This is what Ensemble Learning tries to do: combine a group of ML models to improve the solution to a specific problem. Throughout this blog, we will learn what Ensemble Learning is, what types of Ensembles exist, and we will specifically address Voting and Stacking Ensembles. Ensemble Learning refers to the joint use of ML algorithms, mainly to solve classification and/or regression problems. These algorithms can be of the same type (homogeneous Ensemble Learning) or of different types (heterogeneous Ensemble Learning).
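As an illustration, here is a minimal sketch, assuming the scikit-learn library, of a heterogeneous Voting ensemble and a Stacking ensemble on a toy classification problem; the base estimators and hyperparameters are illustrative choices, not a recommendation:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier, StackingClassifier

# Toy classification data
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Heterogeneous base models (different algorithm types)
base_models = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("dt", DecisionTreeClassifier(max_depth=5)),
    ("svc", SVC(probability=True)),
]

# Voting ensemble: combine the base models' predictions by (soft) vote
voting = VotingClassifier(estimators=base_models, voting="soft")
voting.fit(X_train, y_train)
print("Voting accuracy:", voting.score(X_test, y_test))

# Stacking ensemble: a meta-model learns how to combine the base models
stacking = StackingClassifier(estimators=base_models,
                              final_estimator=LogisticRegression(max_iter=1000))
stacking.fit(X_train, y_train)
print("Stacking accuracy:", stacking.score(X_test, y_test))
```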


Deeply Learning the Messages in Message Passing Inference

arXiv.org Machine Learning

Deep structured output learning shows great promise in tasks like semantic image segmentation. We propose a new, efficient deep structured model learning scheme, in which we show how deep Convolutional Neural Networks (CNNs) can be used to estimate the messages in message passing inference for structured prediction with Conditional Random Fields (CRFs). With such CNN message estimators, we obviate the need to learn or evaluate potential functions for message calculation. This confers significant efficiency for learning, since otherwise, when performing structured learning for a CRF with CNN potentials, it is necessary to undertake expensive inference for every stochastic gradient iteration. The network output dimension for message estimation is the same as the number of classes, in contrast to the network output for general CNN potential functions in CRFs, which is exponential in the order of the potentials. Hence CNN message learning has fewer network parameters and is more scalable in cases where a large number of classes is involved. We apply our method to semantic image segmentation on the PASCAL VOC 2012 dataset. We achieve an intersection-over-union score of 73.4 on its test set, which is the best reported result for methods using the VOC training images alone. This impressive performance demonstrates the effectiveness and usefulness of our CNN message learning method.
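To make the dimensionality argument concrete, here is a minimal, illustrative sketch (not the paper's architecture) of a conventional sum-product-style message computation for a pairwise CRF: each message is a K-vector, but computing it from a potential function requires a K x K table, which is what the paper's CNN message estimators sidestep by regressing the K-vector directly:

```python
import numpy as np

K = 21  # e.g. number of classes in PASCAL VOC segmentation (20 objects + background)

# Conventional route: the message from node j to node i is a K-vector computed
# from a K x K pairwise potential and node j's current belief.
pairwise_potential = np.random.rand(K, K)   # K^2 values must be modelled/evaluated
belief_j = np.random.rand(K)
belief_j /= belief_j.sum()

# Toy sum-product-style message: m_{j->i}(k) = sum_l belief_j(l) * psi(l, k)
message_ji = belief_j @ pairwise_potential  # shape (K,)

# The paper's idea, sketched: a CNN estimates this K-dimensional message directly
# from image features, so the network output dimension is K rather than K^2
# (or K^order for higher-order potentials).
print(message_ji.shape)  # (21,)
```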


Improve Machine Learning Results with Ensemble Learning

#artificialintelligence

NOTE: This article assumes a basic understanding of Machine Learning algorithms. Suppose you want to buy a new mobile phone. Would you walk directly into the first shop and purchase a phone based on the shopkeeper's advice? More likely, you would visit some online mobile seller sites where you can see a variety of mobile phones and compare their specifications, features, and prices. You might also consider the reviews that people have posted on those sites. And you would probably ask your friends and colleagues for their opinions as well.


Unbiased Estimation of the Value of an Optimized Policy

arXiv.org Machine Learning

Randomized trials, also known as A/B tests, are used to select between two policies: a control and a treatment. Given a corresponding set of features, we can ideally learn an optimized policy P that maps the A/B test data features to the action space and optimizes reward. However, although A/B testing provides an unbiased estimator for the value of deploying B (i.e., switching from policy A to B), direct application of those samples to learn the optimized policy P generally does not provide an unbiased estimator of the value of P, as the samples were observed when constructing P. In situations where the costs and risks associated with deploying a policy are high, such an unbiased estimator is highly desirable. We present a procedure for learning optimized policies and getting unbiased estimates for the value of deploying them. We wrap any policy learning procedure with a bagging process and obtain out-of-bag policy inclusion decisions for each sample. We then prove that the inverse-propensity-weighting effect estimator is unbiased when applied to the optimized subset. Likewise, we apply the same idea to obtain an out-of-bag, unbiased, per-sample value estimate of the measurement that is independent of the randomized treatment, and use these estimates to build an unbiased doubly-robust effect estimator. Lastly, we empirically show that even when the average treatment effect is negative we can find a positive optimized policy.
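Below is a minimal sketch of the procedure described in the abstract, under assumed details: a toy synthetic reward model, a hypothetical T-learner-style policy learner, and a fixed, known treatment probability. It wraps the policy learner in bagging, forms out-of-bag inclusion decisions per sample, and applies the inverse-propensity-weighting estimator on the optimized subset:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# --- Toy randomized (A/B) data: features X, treatment t, reward r ----------
n, p_treat = 4000, 0.5
X = rng.normal(size=(n, 5))
t = rng.binomial(1, p_treat, size=n)
tau = np.where(X[:, 0] > 0, 0.5, -0.3)          # true per-sample treatment effect
r = 0.2 * X[:, 1] + t * tau + rng.normal(scale=0.5, size=n)

def learn_policy(Xb, tb, rb):
    """Hypothetical policy learner: fit one reward model per arm and treat
    whenever the predicted treated reward exceeds the predicted control reward."""
    m1 = DecisionTreeRegressor(max_depth=3).fit(Xb[tb == 1], rb[tb == 1])
    m0 = DecisionTreeRegressor(max_depth=3).fit(Xb[tb == 0], rb[tb == 0])
    return lambda Xq: m1.predict(Xq) > m0.predict(Xq)

# --- Bagging wrapper: out-of-bag policy inclusion decisions ----------------
B = 50
votes, counts = np.zeros(n), np.zeros(n)
for _ in range(B):
    idx = rng.integers(0, n, size=n)            # bootstrap sample (in-bag)
    oob = np.setdiff1d(np.arange(n), idx)       # out-of-bag indices
    policy = learn_policy(X[idx], t[idx], r[idx])
    votes[oob] += policy(X[oob])
    counts[oob] += 1

include = votes / np.maximum(counts, 1) > 0.5   # OOB inclusion decision per sample

# --- IPW effect estimate restricted to the optimized subset ----------------
s = include
ipw = t[s] * r[s] / p_treat - (1 - t[s]) * r[s] / (1 - p_treat)
print("estimated effect of the optimized policy on its subset:", ipw.mean())
print("true effect on that subset:", tau[s].mean())
```

Because each sample's inclusion decision comes only from bags in which it was out-of-bag, the decision is independent of that sample's own randomized treatment and reward, which is the intuition behind the unbiasedness claim.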


Data Science is not "Statistics Done Wrong"

#artificialintelligence

I recently came across the thoughtful article "On Moving from Statistics to Machine Learning, the Final Stage of Grief". It makes some good points and is worth the read. However, it also reminded me of the unexamined claim that "data science is statistics done wrong." Frankly, this is not the case, though it might be better for statisticians if it were. I'd like to touch on just one of the examples here.