
Patterns, predictions, and actions: A story about machine learning

arXiv.org Machine Learning

This graduate textbook on machine learning tells a story of how patterns in data support predictions and consequential actions. Starting with the foundations of decision making, we cover representation, optimization, and generalization as the constituents of supervised learning. A chapter on datasets as benchmarks examines their histories and scientific bases. Self-contained introductions to causality, the practice of causal inference, sequential decision making, and reinforcement learning equip the reader with concepts and tools to reason about actions and their consequences. Throughout, the text discusses historical context and societal impact. We invite readers from all backgrounds; some experience with probability, calculus, and linear algebra suffices.



Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey

arXiv.org Artificial Intelligence

Reinforcement learning (RL) is a popular paradigm for addressing sequential decision tasks in which the agent has only limited environmental feedback. Despite many advances over the past three decades, learning in many domains still requires a large amount of interaction with the environment, which can be prohibitively expensive in realistic scenarios. To address this problem, transfer learning has been applied to reinforcement learning such that experience gained in one task can be leveraged when starting to learn the next, harder task. More recently, several lines of research have explored how tasks, or data samples themselves, can be sequenced into a curriculum for the purpose of learning a problem that may otherwise be too difficult to learn from scratch. In this article, we present a framework for curriculum learning (CL) in reinforcement learning, and use it to survey and classify existing CL methods in terms of their assumptions, capabilities, and goals. Finally, we use our framework to find open problems and suggest directions for future RL curriculum learning research.
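The core idea — sequence easier tasks first and transfer what was learned — can be sketched with a toy example (a minimal sketch, not any method surveyed in the article): tabular Q-learning on progressively longer chain MDPs, warm-starting each task from the previous task's Q-table.

```python
import random

def q_learn_chain(length, q=None, episodes=200, seed=0):
    """Tabular Q-learning on a 1-D chain: start at state 0, reward 1
    at state `length`. Passing in `q` warm-starts learning from a
    previous (easier) task -- the transfer step a curriculum relies on."""
    rng = random.Random(seed)
    q = dict(q or {})
    alpha, gamma, eps = 0.5, 0.95, 0.2
    actions = (1, -1)
    for _ in range(episodes):
        s = 0
        while s != length:
            # Epsilon-greedy action selection (ties break toward +1).
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: q.get((s, act), 0.0))
            s2 = min(max(s + a, 0), length)
            r = 1.0 if s2 == length else 0.0
            nxt = 0.0 if s2 == length else max(q.get((s2, b), 0.0) for b in actions)
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * nxt - old)
            s = s2
    return q

# Curriculum: solve short chains first, reusing the Q-table each time.
q = None
for length in (2, 4, 8):
    q = q_learn_chain(length, q)

# Greedy rollout on the hardest task.
s, steps = 0, 0
while s != 8 and steps < 20:
    a = max((1, -1), key=lambda act: q.get((s, act), 0.0))
    s = min(max(s + a, 0), 8)
    steps += 1
```

The chain lengths (2, 4, 8) and hyperparameters here are arbitrary; the point is only that each harder task starts from the previous task's value estimates rather than from scratch.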


Bayesian Machine Learning in Python: A/B Testing

#artificialintelligence

Traditional A/B testing has been around for a long time, and it's full of approximations and confusing definitions. In this course, while we will do traditional A/B testing in order to appreciate its complexity, what we will eventually get to is the Bayesian machine learning way of doing things. First, we'll see if we can improve on ... Bestseller by Lazy Programmer Inc. What you'll learn: use adaptive algorithms to improve A/B testing performance; understand the difference between Bayesian and frequentist statistics; apply Bayesian methods to A/B testing. This course is all about A/B testing. A/B testing is used everywhere.
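The adaptive, Bayesian approach the course builds toward can be illustrated with Beta-Bernoulli Thompson sampling (a sketch, not the course's code; the conversion rates below are made up):

```python
import random

def thompson_ab_test(true_rates, n_trials=10000, seed=0):
    """Adaptive A/B test via Beta-Bernoulli Thompson sampling.

    Each variant keeps a Beta(alpha, beta) posterior over its conversion
    rate; every round we sample a rate from each posterior and show the
    variant whose sample is highest, so traffic shifts toward the
    apparent winner as evidence accumulates.
    """
    rng = random.Random(seed)
    n = len(true_rates)
    alpha = [1.0] * n  # prior successes + 1
    beta = [1.0] * n   # prior failures + 1
    pulls = [0] * n
    for _ in range(n_trials):
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(n)]
        arm = samples.index(max(samples))
        converted = rng.random() < true_rates[arm]
        alpha[arm] += converted
        beta[arm] += not converted
        pulls[arm] += 1
    return pulls

# Hypothetical variants with 4% and 8% conversion rates.
pulls = thompson_ab_test([0.04, 0.08])
```

Unlike a fixed 50/50 split, the allocation itself is the output of inference: the better variant ends up receiving most of the traffic, which is the "adaptive algorithms" benefit the course description refers to.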


A Bayesian Model for Simultaneous Image Clustering, Annotation and Object Segmentation

Neural Information Processing Systems

A non-parametric Bayesian model is proposed for processing multiple images. The analysis employs image features and, when present, the words associated with accompanying annotations. The model clusters the images into classes, and each image is segmented into a set of objects, also allowing the opportunity to assign a word to each object (localized labeling). Each object is assumed to be represented as a heterogeneous mix of components, with this realized via mixture models linking image features to object types. The number of image classes, number of object types, and the characteristics of the object-feature mixture models are inferred non-parametrically.


Convergence Rates of Active Learning for Maximum Likelihood Estimation

Neural Information Processing Systems

An active learner is given a class of models, a large set of unlabeled examples, and the ability to interactively query labels of a subset of these examples; the goal of the learner is to learn a model in the class that fits the data well. Previous theoretical work has rigorously characterized the label complexity of active learning, but most of this work has focused on the PAC or the agnostic PAC model. In this paper, we shift our attention to a more general setting -- maximum likelihood estimation. Provided certain conditions hold on the model class, we provide a two-stage active learning algorithm for this problem. The conditions we require are fairly general, and cover the widely popular class of Generalized Linear Models, which, in turn, include models for binary and multi-class classification, regression, and conditional random fields.
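A generic two-stage active learner in this spirit (a toy sketch using uncertainty sampling on a pilot MLE fit, not the paper's exact algorithm) might look like:

```python
import math

def fit_logistic_1d(xs, ys, epochs=500, lr=0.5):
    """Maximum-likelihood fit of y ~ sigmoid(w*x + b) by gradient ascent."""
    w = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (y - p) * x
            gb += (y - p)
        w += lr * gw / n
        b += lr * gb / n
    return w, b

def two_stage_active(pool, oracle, n1=10, n2=10):
    """Two-stage active learning sketch: stage 1 labels a small pilot set
    spread across the pool and fits a pilot MLE; stage 2 spends the rest
    of the label budget on the pool points the pilot model is least
    certain about (predicted probability closest to 1/2)."""
    step = max(1, len(pool) // n1)
    pilot = pool[::step]
    labeled = [(x, oracle(x)) for x in pilot]
    w, b = fit_logistic_1d([x for x, _ in labeled], [y for _, y in labeled])
    rest = [x for x in pool if x not in pilot]
    rest.sort(key=lambda x: abs(1.0 / (1.0 + math.exp(-(w * x + b))) - 0.5))
    labeled += [(x, oracle(x)) for x in rest[:n2]]
    return fit_logistic_1d([x for x, _ in labeled], [y for _, y in labeled])

# Toy pool on [-5, 5]; labels flip at x = 0, so the most informative
# points cluster near the decision boundary.
pool = [i / 10.0 for i in range(-50, 51)]
w, b = two_stage_active(pool, oracle=lambda x: 1 if x > 0 else 0)
p_pos = 1.0 / (1.0 + math.exp(-(w * 2 + b)))   # prediction at x = +2
p_neg = 1.0 / (1.0 + math.exp(-(w * -2 + b)))  # prediction at x = -2
```

The paper's contribution is the theoretical analysis of when such a two-stage scheme achieves good convergence rates for the MLE; this sketch only shows the mechanics of a pilot fit followed by uncertainty-driven querying.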


A Tutorial on Learning With Bayesian Networks

arXiv.org Machine Learning

A Bayesian network is a graphical model that encodes probabilistic relationships among variables of interest. When used in conjunction with statistical techniques, the graphical model has several advantages for data analysis. One, because the model encodes dependencies among all variables, it readily handles situations where some data entries are missing. Two, a Bayesian network can be used to learn causal relationships, and hence can be used to gain understanding about a problem domain and to predict the consequences of intervention. Three, because the model has both a causal and probabilistic semantics, it is an ideal representation for combining prior knowledge (which often comes in causal form) and data. Four, Bayesian statistical methods in conjunction with Bayesian networks offer an efficient and principled approach for avoiding the overfitting of data. In this paper, we discuss methods for constructing Bayesian networks from prior knowledge and summarize Bayesian statistical methods for using data to improve these models. With regard to the latter task, we describe methods for learning both the parameters and structure of a Bayesian network, including techniques for learning with incomplete data. In addition, we relate Bayesian-network methods for learning to techniques for supervised and unsupervised learning. We illustrate the graphical-modeling approach using a real-world case study.
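As a toy illustration of how a network's structure encodes a full joint distribution via the chain rule, here is the classic rain/sprinkler network queried by enumeration (textbook example values, not from this paper):

```python
from itertools import product

# Toy Bayesian network: Rain -> Sprinkler, and both -> WetGrass.
# Each table gives P(variable = True) for each parent configuration.
p_rain = 0.2
p_sprinkler = {True: 0.01, False: 0.4}             # P(S=T | R)
p_wet = {(True, True): 0.99, (True, False): 0.90,  # P(W=T | S, R)
         (False, True): 0.80, (False, False): 0.0}

def joint(r, s, w):
    """Chain rule over the graph: P(R, S, W) = P(R) P(S|R) P(W|S,R)."""
    pr = p_rain if r else 1 - p_rain
    ps = p_sprinkler[r] if s else 1 - p_sprinkler[r]
    pw = p_wet[(s, r)] if w else 1 - p_wet[(s, r)]
    return pr * ps * pw

# Sanity check: the factored joint sums to 1 over all 8 configurations.
total = sum(joint(r, s, w) for r, s, w in product((True, False), repeat=3))

# Inference by enumeration: probability it rained given the grass is wet.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
posterior = num / den  # P(R=T | W=T)
```

Enumeration is exponential in the number of variables; the methods discussed in the tutorial (and its treatment of learning parameters and structure from data) are what make such models practical at scale.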


The Reciprocal Bayesian LASSO

arXiv.org Machine Learning

Throughout the course of the paper, we assume that y and X have been centered at 0 so there is no intercept in the model, where y is the n × 1 vector of centered responses, X is the n × p matrix of standardized regressors, β is the p × 1 vector of coefficients to be estimated, and ε is the n × 1 vector of independent and identically distributed normal errors with mean 0 and variance σ². Compared to traditional penalization functions, which are usually symmetric about 0, continuous, and nondecreasing on (0, ∞), the rLASSO penalty functions are decreasing on (0, ∞), discontinuous at 0, and diverge to infinity as the coefficients approach zero. From a theoretical standpoint, rLASSO shares the same oracle property and the same rate of estimation error as other LASSO-type penalty functions. An early reference to this class of models can be found in Song and Liang (2015), with more recent papers focusing on large-sample asymptotics, along with computational strategies for frequentist estimation (Shin et al., 2018; Song, 2018). Our approach differs from this line of work in adopting a Bayesian perspective on rLASSO estimation. Ideally, a Bayesian solution can be obtained by placing appropriate priors on the regression coefficients that mimic the effects of the rLASSO penalty. As is apparent from (1), this amounts to assuming a prior for β that decomposes as a product of independent inverse Laplace (double exponential) densities: $\pi(\beta) = \prod_{j=1}^{p} \frac{\lambda}{2\beta_j^{2}} \exp\{-\lambda/|\beta_j|\}\, \mathbb{1}\{\beta_j \neq 0\}$.
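The shape contrast between the two penalties — LASSO increasing in |β| and zero at 0, rLASSO decreasing in |β| and diverging as β → 0 — can be checked numerically (a sketch; λ = 1 is an arbitrary choice):

```python
def lasso_pen(beta, lam=1.0):
    """Standard LASSO penalty lam * |beta|: zero at 0, increasing in |beta|."""
    return lam * abs(beta)

def rlasso_pen(beta, lam=1.0):
    """rLASSO penalty lam / |beta|: decreasing on (0, inf), discontinuous
    at 0, and diverging as beta -> 0, so small nonzero coefficients are
    penalized hardest -- the reverse of the LASSO's shape."""
    return lam / abs(beta) if beta != 0 else 0.0

grid = [0.1, 0.5, 1.0, 2.0, 10.0]
lasso_vals = [lasso_pen(b) for b in grid]
rlasso_vals = [rlasso_pen(b) for b in grid]
```

This reversal is what drives rLASSO toward solutions whose nonzero coefficients are large: a coefficient is either pushed to exactly zero or kept well away from it.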


Interactive Open-Ended Learning for 3D Object Recognition

arXiv.org Artificial Intelligence

The thesis contributes in several important ways to the research area of 3D object category learning and recognition. To cope with such limitations, we look at human cognition, in particular at the fact that human beings learn to recognize object categories ceaselessly over time. This ability to refine knowledge from the set of accumulated experiences facilitates adaptation to new environments. Inspired by this capability, we seek to create a cognitive object perception and perceptual learning architecture that can learn 3D object categories in an open-ended fashion. In this context, ``open-ended'' implies that the set of categories to be learned is not known in advance, and the training instances are extracted from the actual experiences of a robot, and thus become gradually available rather than being available from the beginning of the learning process. In particular, this architecture provides perception capabilities that allow robots to incrementally learn object categories from the set of accumulated experiences and reason about how to perform complex tasks. This framework integrates detection, tracking, teaching, learning, and recognition of objects. An extensive set of systematic experiments, in multiple experimental settings, was carried out to thoroughly evaluate the described learning approaches. Experimental results show that the proposed system is able to interact with human users, learn new object categories over time, and perform complex tasks. The contributions presented in this thesis have been fully implemented, evaluated on different standard object and scene datasets, and empirically validated on different robotic platforms.