Universal Supervised Learning for Individual Data

arXiv.org Machine Learning

Universal supervised learning is considered from an information-theoretic point of view, following the universal prediction approach of Merhav and Feder (1998). We consider the standard supervised "batch" learning setting, where prediction is done on a test sample once the entire training data has been observed, and the individual setting, where the features and labels, in both the training and test data, are specific individual quantities. The information-theoretic approach naturally uses the self-information loss, or log-loss. Our results provide universal learning schemes that compete with a "genie" (or reference) that knows the true test label. In particular, it is demonstrated that the main proposed scheme, termed Predictive Normalized Maximum Likelihood (pNML), is a robust learning solution that outperforms the current leading approach based on Empirical Risk Minimization (ERM). Furthermore, the pNML construction provides a pointwise indication of the learnability of the specific test challenge with the given training examples.
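
For orientation, the pNML idea can be written compactly; the following rendering and notation are our own summary of the standard formulation, not a quotation from the paper. Given training data z^N = (x^N, y^N), a test feature x, and a hypothesis class {p_theta}, the learner assigns

    q_{\mathrm{pNML}}(y \mid x) \;=\; \frac{p_{\hat{\theta}(z^N, x, y)}(y \mid x)}{\sum_{y'} p_{\hat{\theta}(z^N, x, y')}(y' \mid x)}

where \hat{\theta}(z^N, x, y) is the maximum-likelihood estimate computed on the training set augmented with the candidate pair (x, y). The log of the normalizer in the denominator is the pointwise regret against the label-knowing genie, and it is this quantity that serves as the learnability indication mentioned above.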


Incremental Learning for Metric-Based Meta-Learners

arXiv.org Machine Learning

The majority of modern meta-learning methods for few-shot classification operate in two phases: a meta-training phase, where the meta-learner learns a generic representation by solving multiple few-shot tasks sampled from a large dataset, and a testing phase, where the meta-learner leverages its learnt internal representation for a specific few-shot task involving classes that were not seen during meta-training. To the best of our knowledge, all such meta-learning methods use a single base dataset to sample meta-training tasks from and do not adapt the algorithm after meta-training. This strategy may not scale to real-world use cases where the meta-learner may not have access to the full meta-training dataset from the very beginning, and the meta-learner must instead be updated incrementally as additional training data becomes available. Through our experimental setup, we develop a notion of incremental learning during the meta-training phase and propose a method that can be used with multiple existing metric-based meta-learning algorithms. Experimental results on a benchmark dataset show that our approach performs favorably at test time compared to training a model with the full meta-training set, and incurs a negligible amount of catastrophic forgetting.
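
For readers unfamiliar with the term, metric-based meta-learners of this kind typically handle a few-shot test task by embedding the handful of labelled support examples, forming one representative point per class, and labelling queries by distance in embedding space. The sketch below is our own minimal illustration of that classification step (prototypical-network style, with a placeholder embedding and hypothetical helper names such as prototypes and classify); it is not the paper's incremental meta-training procedure.

    import numpy as np

    def embed(x):
        # Placeholder embedding; in a real meta-learner this is the learned network.
        return x

    def prototypes(support_x, support_y):
        # Mean embedding per class, computed from the few labelled support examples.
        classes = np.unique(support_y)
        protos = np.stack([embed(support_x[support_y == c]).mean(axis=0) for c in classes])
        return classes, protos

    def classify(query_x, classes, protos):
        # Label each query with the class of the nearest prototype (Euclidean distance).
        q = embed(query_x)
        dists = ((q[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)  # (n_query, n_class)
        return classes[dists.argmin(axis=1)]

    # Tiny usage example with random 2-D "features" standing in for embeddings.
    rng = np.random.default_rng(0)
    support_x = np.vstack([rng.normal(0, 1, (5, 2)), rng.normal(3, 1, (5, 2))])
    support_y = np.array([0] * 5 + [1] * 5)
    query_x = np.vstack([rng.normal(0, 1, (2, 2)), rng.normal(3, 1, (2, 2))])
    classes, protos = prototypes(support_x, support_y)
    print(classify(query_x, classes, protos))  # expected: mostly [0 0 1 1]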


Learning from Imbalanced Classes - Silicon Valley Data Science

#artificialintelligence

If you're fresh from a machine learning course, chances are most of the datasets you used were fairly easy. Among other things, when you built classifiers, the example classes were balanced, meaning there were approximately the same number of examples of each class. Instructors usually employ cleaned-up datasets so as to concentrate on teaching specific algorithms or techniques without getting distracted by other issues. Usually you're shown examples like the figure below in two dimensions, with points representing examples and different colors (or shapes) of the points representing the class; the goal of a classification algorithm is to learn a separator (classifier) that can distinguish the two. But when you start looking at real, uncleaned data, one of the first things you notice is that it's a lot noisier and more imbalanced.
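
To make the problem concrete, here is a small toy example of our own (not taken from the article): with a roughly 95/5 class split, an unweighted classifier can look accurate while largely ignoring the minority class, and a standard remedy such as scikit-learn's class_weight="balanced" typically recovers much higher minority-class recall on data like this. Minority-class recall, rather than overall accuracy, is the number to watch.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import recall_score

    rng = np.random.default_rng(0)
    n_majority, n_minority = 950, 50
    X = np.vstack([rng.normal(0.0, 1.0, (n_majority, 2)),
                   rng.normal(1.5, 1.0, (n_minority, 2))])
    y = np.array([0] * n_majority + [1] * n_minority)

    plain = LogisticRegression().fit(X, y)
    weighted = LogisticRegression(class_weight="balanced").fit(X, y)

    # Minority-class recall is the number to watch, not overall accuracy.
    print("plain    recall:", recall_score(y, plain.predict(X)))
    print("weighted recall:", recall_score(y, weighted.predict(X)))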


Start coding with this comprehensive master class

Mashable

TL;DR: The Complete C# Master Class Course is on sale for £10.55 as of June 30, saving you 93% on list price. Remember back in the day when you had to spend years in school and/or your life savings to learn to code? Those days are long gone. Getting into coding is easier -- and more affordable -- than ever before. Since there are approximately a billion coding languages out there, choosing one to start with is no easy feat.


Logistic Regression as Soft Perceptron Learning

arXiv.org Machine Learning

We comment on the connection between gradient ascent for logistic regression and the perceptron learning algorithm: logistic learning is the "soft" variant of perceptron learning.
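
To spell out the connection (our own summary of the standard update rules, not an excerpt from the paper): with labels y in {0, 1}, features x, weights w, and learning rate \eta, the two per-example updates differ only in whether the prediction is a hard threshold or a sigmoid:

    \text{Perceptron:}\qquad w \leftarrow w + \eta\,\bigl(y - \mathbb{1}[w^{\top}x > 0]\bigr)\,x
    \text{Logistic regression:}\quad w \leftarrow w + \eta\,\bigl(y - \sigma(w^{\top}x)\bigr)\,x, \qquad \sigma(t) = \frac{1}{1 + e^{-t}}

Replacing the indicator by the sigmoid is the "softening" the abstract refers to; in the limit where the sigmoid becomes a step function, the logistic update reduces to the perceptron update.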