Inductive Learning


Toward ML-Centric Cloud Platforms

Communications of the ACM

Cloud platforms, such as Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform, are tremendously complex. Their main resource management systems include virtual machine (VM) and container (hereafter we refer to VMs and containers simply as "containers") scheduling, server and container health monitoring and repairs, power and energy management, and other management functions. Cloud platforms are also extremely expensive to build and operate, so providers have a strong incentive to optimize their use. A nascent approach is to leverage machine learning (ML) in the platforms' resource management, using supervised learning techniques, such as gradient-boosted trees and neural networks, or reinforcement learning. We also discuss why ML is often preferable to traditional non-ML techniques.


Machine Learning Necessary for Deep Learning

#artificialintelligence

A commonly agreed-upon definition of machine learning is: a computer program is said to have learned when its performance, as measured by P, at task T improves with experience E. Under the definition of supervised learning, we get this diagram. Here the experience would be the training data required to improve the algorithm. In practice we put this data into the design matrix. Design matrix: if a single input can be represented as a vector, stacking all of the training examples, i.e., the vectors, into one matrix captures the entire input portion of the training data. This is not all of the experience, since the supervised targets (labels) are stored separately.
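To make the design-matrix idea concrete, here is a minimal sketch (the data and shapes are illustrative, not from the article): each training example is a feature vector, stacking the vectors row-wise yields the design matrix, and the labels live in a separate vector.

```python
# Minimal design-matrix sketch: stack per-example feature vectors into one
# m x n matrix X; the labels y are stored separately, which is why the design
# matrix alone is not all of the experience.
import numpy as np

# Three illustrative training examples, each with four features.
examples = [
    [5.1, 3.5, 1.4, 0.2],
    [4.9, 3.0, 1.4, 0.2],
    [6.2, 3.4, 5.4, 2.3],
]
labels = [0, 0, 1]  # supervised targets kept outside the design matrix

X = np.vstack(examples)  # design matrix, shape (3, 4): one row per example
y = np.array(labels)     # label vector, shape (3,)

print(X.shape, y.shape)  # (3, 4) (3,)
```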


SEERL: Sample Efficient Ensemble Reinforcement Learning

#artificialintelligence

Ensemble learning is a very prevalent method in machine learning. The relative success of ensemble methods is attributed to their ability to tackle a wide range of instances and complex problems that require different low-level approaches. However, ensemble methods are relatively less popular in reinforcement learning owing to the high sample complexity and computational expense involved. We present a new training and evaluation framework for model-free algorithms that uses ensembles of policies obtained from a single training instance. These policies are diverse in nature and are learned through directed perturbation of the model parameters at regular intervals.
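As a rough illustration of evaluating an ensemble of policies taken from one training run (a minimal sketch only, not the paper's SEERL procedure; the linear policies and majority-vote combination are assumptions for the example):

```python
# Sketch: combine policy snapshots saved at regular intervals during a single
# training run by majority vote over their greedy actions at evaluation time.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
n_actions, obs_dim = 4, 8

# Stand-ins for checkpointed policy parameters (in SEERL these would come from
# directed perturbation of the model parameters at regular intervals).
policies = [rng.normal(size=(n_actions, obs_dim)) for _ in range(5)]

def greedy_action(theta, obs):
    """Highest-scoring action under one snapshot's (linear) parameters."""
    return int(np.argmax(theta @ obs))

def ensemble_action(obs):
    """Majority vote over the snapshots' greedy actions."""
    votes = [greedy_action(theta, obs) for theta in policies]
    return Counter(votes).most_common(1)[0][0]

obs = rng.normal(size=obs_dim)
print(ensemble_action(obs))
```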


Putting Machine Learning to Work 7wData

#artificialintelligence

Supervised learning solves modern analytics challenges and drives informed organizational decisions. Although the predictive power of machine learning models can be very impressive, there is no benefit unless they drive actions. Models must be deployed in an automated fashion to continually support decision making for measurable benefit. And while unsupervised methods open powerful analytic opportunities, they do not come with a clear path to deployment. This course will clarify when each approach best fits the business need and show you how to derive value from both approaches.


Why We Must Unshackle AI From the Boundaries of Human Knowledge

#artificialintelligence

Artificial intelligence (AI) has made astonishing progress in the last decade. AI can now drive cars, diagnose diseases from medical images, recommend movies (and even whom you should date), make investment decisions, and create art that has sold at auction. A lot of research today, however, focuses on teaching AI to do things the way we do them. For example, computer vision and natural language processing – two of the hottest research areas in the field – deal with building AI models that can see like humans and use language like humans. But instead of teaching computers to imitate human thought, the time has come to let them evolve on their own, so that rather than becoming like us, they have a chance to become better than us.


A Transductive Bound for the Voted Classifier with an Application to Semi-supervised Learning

Neural Information Processing Systems

In this paper we present two transductive bounds on the risk of the majority vote estimated over partially labeled training sets. Our first bound is tight, when the additional unlabeled training data are used, in the cases where the voted classifier makes its errors on low-margin observations and where the error of the associated Gibbs classifier can be accurately estimated. In semi-supervised learning, treating the margin as an indicator of confidence is the working hypothesis of algorithms that search for the decision boundary in low-density regions. For this case, we propose a second bound on the joint probability that the voted classifier makes an error on an example whose margin is above a fixed threshold. As an application, we are interested in self-learning algorithms that iteratively assign pseudo-labels to unlabeled training examples whose margin is above a threshold obtained from this bound.
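A minimal sketch of the kind of margin-based self-learning loop described above, assuming scikit-learn and a fixed, hypothetical margin threshold (in the paper the threshold would be obtained from the transductive bound):

```python
# Self-training sketch: fit on labeled data, pseudo-label unlabeled examples
# whose margin exceeds a threshold, add them to the labeled set, and refit.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_lab, y_lab, X_unl = X[:100], y[:100], X[100:]  # small labeled set, rest unlabeled

THRESHOLD = 1.5  # hypothetical margin threshold (the bound would supply this)

clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
for _ in range(5):  # a few self-training rounds
    if len(X_unl) == 0:
        break
    margins = np.abs(clf.decision_function(X_unl))  # |margin| as confidence
    confident = margins > THRESHOLD
    if not confident.any():
        break
    # Move confidently pseudo-labeled examples into the labeled set and refit.
    X_lab = np.vstack([X_lab, X_unl[confident]])
    y_lab = np.concatenate([y_lab, clf.predict(X_unl[confident])])
    X_unl = X_unl[~confident]
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
```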


Contextual semibandits via supervised learning oracles

Neural Information Processing Systems

We study an online decision-making problem where, on each round, a learner chooses a list of items based on some side information, receives a scalar feedback value for each individual item, and obtains a reward that is linearly related to this feedback. These problems, known as contextual semibandits, arise in crowdsourcing, recommendation, and many other domains. This paper reduces contextual semibandits to supervised learning, allowing us to leverage powerful supervised learning methods in this partial-feedback setting. Our first reduction applies when the mapping from feedback to reward is known and leads to a computationally efficient algorithm with near-optimal regret. We show that this algorithm outperforms state-of-the-art approaches on real-world learning-to-rank datasets, demonstrating the advantage of oracle-based algorithms.
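For readers unfamiliar with the setting, a minimal simulation of the semibandit interaction loop may help (this sketches only the problem setup; the item features, noise model, and weights mapping feedback to reward are assumptions, and the paper's oracle-based reduction is not shown):

```python
# Contextual-semibandit interaction sketch: pick a list of items per context,
# observe per-item feedback, and receive a reward linear in that feedback.
import numpy as np

rng = np.random.default_rng(1)
n_items, list_len, ctx_dim, rounds = 20, 3, 5, 10
item_features = rng.normal(size=(n_items, ctx_dim))
weights = np.ones(list_len)  # known linear map from per-item feedback to reward

for t in range(rounds):
    context = rng.normal(size=ctx_dim)                  # side information
    scores = item_features @ context                    # learner's scoring rule
    chosen = np.argsort(scores)[-list_len:]             # choose a list of items
    feedback = scores[chosen] + rng.normal(scale=0.1, size=list_len)  # per-item feedback
    reward = float(weights @ feedback)                  # reward linear in feedback
    print(f"round {t}: reward {reward:.2f}")
```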


Regularized Boost for Semi-Supervised Learning

Neural Information Processing Systems

Semi-supervised inductive learning concerns how to learn a decision rule from a data set containing both labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes local smoothness constraints among data into account during ensemble learning. In this paper, we introduce a local smoothness regularizer to semi-supervised boosting algorithms based on the universal optimization framework of margin cost functionals. Our regularizer is applicable to existing semi-supervised boosting algorithms to improve their generalization and speed up their training.
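One simple way to quantify local smoothness on unlabeled data is to measure how often a point's prediction disagrees with its neighbors' predictions; the sketch below illustrates that idea (it is not the paper's regularizer, and the k-nearest-neighbor disagreement measure is an assumption for the example):

```python
# Sketch of a local-smoothness penalty: average disagreement between each
# point's prediction and the predictions of its k nearest neighbors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_smoothness_penalty(X_unl, preds, k=5):
    """Fraction of k-nearest-neighbor pairs whose predictions disagree."""
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X_unl)
    _, idx = nbrs.kneighbors(X_unl)      # idx[:, 0] is the point itself
    neighbor_preds = preds[idx[:, 1:]]   # predictions of the k neighbors
    return float((neighbor_preds != preds[:, None]).mean())

# Illustrative usage: such a penalty, suitably weighted, could be added to the
# margin cost that the boosting procedure minimizes.
X_demo = np.random.default_rng(2).normal(size=(50, 4))
preds_demo = (X_demo[:, 0] > 0).astype(int)
print(local_smoothness_penalty(X_demo, preds_demo))
```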


Posterior Consistency of the Silverman g-prior in Bayesian Model Choice

Neural Information Processing Systems

Kernel supervised learning methods can be unified by utilizing the tools from regularization theory. The duality between regularization and prior leads to interpreting regularization methods in terms of maximum a posteriori estimation and has motivated Bayesian interpretations of kernel methods. In this paper we pursue a Bayesian interpretation of sparsity in the kernel setting by making use of a mixture of a point-mass distribution and a prior that we refer to as "Silverman's g-prior." We provide a theoretical analysis of the posterior consistency of a Bayesian model choice procedure based on this prior. We also establish the asymptotic relationship between this procedure and the Bayesian information criterion.
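Schematically, a point-mass-plus-prior mixture of the kind described above takes a spike-and-slab form; the sketch below is generic, and the slab covariance standing in for Silverman's g-prior is a placeholder rather than the paper's exact specification:

```latex
% Generic spike-and-slab structure (the slab covariance \Sigma_g is a
% placeholder; the paper specifies it via Silverman's g-prior).
p(\beta_j) \;=\; (1-\pi)\,\delta_0(\beta_j) \;+\; \pi\,\mathcal{N}\!\left(\beta_j \mid 0,\ \Sigma_g\right)
```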


More data means less inference: A pseudo-max approach to structured learning

Neural Information Processing Systems

The problem of learning to predict structured labels is of key importance in many applications. However, for general graph structure both learning and inference in this setting are intractable. Here we show that it is possible to circumvent this difficulty when the input distribution is rich enough via a method similar in spirit to pseudo-likelihood. We show how our new method achieves consistency, and illustrate empirically that it indeed performs as well as exact methods when sufficiently large training sets are used.
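For background, the pseudo-likelihood idea the abstract alludes to replaces the intractable joint conditional likelihood with a product of per-variable conditionals (the paper's pseudo-max objective differs in its details):

```latex
% Pseudo-likelihood approximation for structured labels y given input x:
% each label variable is conditioned on the observed values of the others.
\log p(\mathbf{y} \mid \mathbf{x}) \;\approx\; \sum_{i} \log p\!\left(y_i \mid \mathbf{y}_{\setminus i}, \mathbf{x}\right)
```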