Variable KD-Tree Algorithms for Spatial Pattern Search and Discovery
Kubica, Jeremy, Masiero, Joseph, Jedicke, Robert, Connolly, Andrew, Moore, Andrew W.
In this paper we consider the problem of finding sets of points that conform to a given underlying model from within a dense, noisy set of observations. This problem is motivated by the task of efficiently linking faint asteroid detections, but is applicable to a range of spatial queries. We survey current tree-based approaches, showing that a tradeoff exists between single-tree and multiple-tree algorithms. To this end, we present a new type of multiple-tree algorithm that uses a variable number of trees to exploit the advantages of both approaches. We empirically show that this algorithm performs well on both simulated and astronomical data.
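The paper's variable multiple-tree algorithm is not reproduced here, but the single-tree baseline it compares against can be illustrated with a minimal kd-tree radius query. This is an illustrative sketch under our own naming; the pruning rule (skip a subtree when the splitting plane lies farther than the query radius) is the standard single-tree mechanism.

```python
# Minimal kd-tree with a radius (range) query -- a sketch of the
# single-tree baseline, not the paper's variable-tree method.

class KDNode:
    def __init__(self, point, axis, left=None, right=None):
        self.point, self.axis = point, axis
        self.left, self.right = left, right

def build(points, depth=0):
    """Build a kd-tree by median-splitting, cycling through dimensions."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return KDNode(points[mid], axis,
                  build(points[:mid], depth + 1),
                  build(points[mid + 1:], depth + 1))

def range_query(node, center, radius, found=None):
    """Collect all points within `radius` of `center`, pruning subtrees
    whose splitting plane is farther than `radius` from the query."""
    if found is None:
        found = []
    if node is None:
        return found
    dist2 = sum((a - b) ** 2 for a, b in zip(node.point, center))
    if dist2 <= radius ** 2:
        found.append(node.point)
    diff = center[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff <= 0 else (node.right, node.left)
    range_query(near, center, radius, found)
    if abs(diff) <= radius:      # plane within radius: far side may hold hits
        range_query(far, center, radius, found)
    return found

tree = build([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
hits = range_query(tree, (5, 5), 2.0)
```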
Data-Driven Online to Batch Conversions
Online learning algorithms are typically fast, memory efficient, and simple to implement. However, many common learning problems fit more naturally in the batch learning setting. The power of online learning algorithms can be exploited in batch settings by using online-to-batch conversion techniques, which build a new batch algorithm from an existing online algorithm. We first give a unified overview of three existing online-to-batch conversion techniques that do not use training data in the conversion process. We then build upon these data-independent conversions to derive and analyze data-driven conversions.
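One classical data-independent conversion of the kind the abstract contrasts against is iterate averaging: run the online learner once over the training sequence and output the average of its intermediate hypotheses. A minimal sketch with a perceptron as the online learner (our choice of learner and names; the paper's data-driven conversions go beyond this):

```python
# Online-to-batch by averaging: the batch hypothesis is the mean of the
# online learner's weight vectors across the training sequence.

def averaged_perceptron(examples, dim):
    w = [0.0] * dim          # current online hypothesis
    w_sum = [0.0] * dim      # running sum of hypotheses
    for x, y in examples:    # labels y in {-1, +1}
        score = sum(wi * xi for wi, xi in zip(w, x))
        if y * score <= 0:   # mistake: standard perceptron update
            w = [wi + y * xi for wi, xi in zip(w, x)]
        w_sum = [s + wi for s, wi in zip(w_sum, w)]
    n = len(examples)
    return [s / n for s in w_sum]   # averaged (batch) hypothesis

data = [((1.0, 1.0), 1), ((-1.0, -1.0), -1),
        ((1.0, 0.5), 1), ((-0.5, -1.0), -1)]
w_avg = averaged_perceptron(data, 2)
preds = [1 if sum(wi * xi for wi, xi in zip(w_avg, x)) > 0 else -1
         for x, _ in data]
```

Averaging is data-independent in the sense the abstract describes: the conversion rule is fixed in advance and does not inspect the training data when combining the online iterates.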
Inference with Minimal Communication: a Decision-Theoretic Variational Approach
Kreidl, O. P., Willsky, Alan S.
Given a directed graphical model with binary-valued hidden nodes and real-valued noisy observations, consider deciding upon the maximum a-posteriori (MAP) or the maximum posterior-marginal (MPM) assignment under the restriction that each node broadcasts exactly one single-bit message to its children. We present a variational formulation, viewing the processing rules local to all nodes as degrees of freedom, that minimizes the loss in expected (MAP or MPM) performance subject to such online communication constraints. The approach leads to a novel message-passing algorithm to be executed offline, or before observations are realized, which mitigates the performance loss by iteratively coupling all rules in a manner implicitly driven by global statistics. We also provide (i) illustrative examples, (ii) assumptions that guarantee convergence and efficiency, and (iii) connections to active research areas.
Multiple Instance Boosting for Object Detection
Zhang, Cha, Platt, John C., Viola, Paul A.
A good image object detection algorithm is accurate, fast, and does not require exact locations of objects in a training set. We can create such an object detector by taking the architecture of the Viola-Jones detector cascade and training it with a new variant of boosting that we call MILBoost. MILBoost uses cost functions from the Multiple Instance Learning literature combined with the AnyBoost framework. We adapt the feature selection criterion of MILBoost to optimize the performance of the Viola-Jones cascade. Experiments show that the detection rate is up to 1.6 times better using MILBoost. This increased detection rate shows the advantage of simultaneously learning the locations and scales of the objects in the training set along with the parameters of the classifier.
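The Multiple Instance Learning cost functions mentioned above score a *bag* of instances (e.g. candidate windows around one loosely labeled object) rather than a single instance. A common choice in this setting is the noisy-OR model, where a bag is positive if any one of its instances is positive; a minimal sketch (our naming, illustrative only):

```python
import math

def instance_prob(score):
    """Logistic link mapping a boosted classifier score to P(instance positive)."""
    return 1.0 / (1.0 + math.exp(-score))

def bag_prob_noisy_or(scores):
    """Noisy-OR bag probability: P(bag positive) = 1 - prod_i (1 - p_i).
    One confident instance is enough to make the whole bag positive."""
    prob_all_negative = 1.0
    for s in scores:
        prob_all_negative *= 1.0 - instance_prob(s)
    return 1.0 - prob_all_negative

# A bag with one strong positive instance scores high even though the
# other candidate windows look negative.
p = bag_prob_noisy_or([2.0, -3.0, -1.0])
```

Because the bag probability is differentiable in the instance scores, it can supply the per-instance weights that an AnyBoost-style gradient framework needs.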
Off-Road Obstacle Avoidance through End-to-End Learning
Muller, Urs, Ben, Jan, Cosatto, Eric, Flepp, Beat, Cun, Yann L.
We describe a vision-based obstacle avoidance system for off-road mobile robots. The system is trained from end to end to map raw input images to steering angles. It is trained in supervised mode to predict the steering angles provided by a human driver during training runs collected in a wide variety of terrains, weather conditions, lighting conditions, and obstacle types. The robot is a 50 cm off-road truck with two forward-pointing wireless color cameras. A remote computer processes the video and controls the robot via radio. The learning system is a large 6-layer convolutional network whose input is a single left/right pair of unprocessed low-resolution images. The robot exhibits an excellent ability to detect obstacles and navigate around them in real time at speeds of 2 m/s.
Distance Metric Learning for Large Margin Nearest Neighbor Classification
Weinberger, Kilian Q., Blitzer, John, Saul, Lawrence K.
We show how to learn a Mahalanobis distance metric for k-nearest neighbor (kNN) classification by semidefinite programming. The metric is trained with the goal that the k nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. On seven data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification; for example, we achieve a test error rate of 1.3% on the MNIST handwritten digits. As in support vector machines (SVMs), the learning problem reduces to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our framework requires no modification or extension for problems in multiway (as opposed to binary) classification.
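The hinge-loss structure described above can be sketched for a single training point: target neighbors contribute a pull term, while differently labeled "impostors" incur a hinge penalty unless they are at least one unit of squared distance farther away than the target neighbor. This is a simplified illustration with a diagonal metric and our own names, not the paper's semidefinite program:

```python
def sq_dist(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y) for a diagonal
    metric M, given as a list of per-dimension weights."""
    return sum(m * (a - b) ** 2 for m, a, b in zip(M, x, y))

def lmnn_loss(xi, target, impostors, M, c=1.0):
    """Large-margin loss for one point: pull the target neighbor close,
    hinge-penalize impostors that invade the unit margin."""
    pull = sq_dist(xi, target, M)
    push = 0.0
    for xl in impostors:
        margin = 1.0 + sq_dist(xi, target, M) - sq_dist(xi, xl, M)
        push += max(0.0, margin)   # hinge: only violated margins cost
    return pull + c * push

# A far impostor costs nothing; a close one activates the hinge.
loss_far = lmnn_loss((0.0, 0.0), (1.0, 0.0), [(3.0, 0.0)], (1.0, 1.0))
loss_near = lmnn_loss((0.0, 0.0), (1.0, 0.0), [(0.5, 0.0)], (1.0, 1.0))
```

Because the loss is linear in the entries of the full metric matrix, summing it over all points yields the convex (semidefinite) program the abstract refers to.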
Divergences, surrogate loss functions and experimental design
Nguyen, XuanLong, Wainwright, Martin J., Jordan, Michael I.
In this paper, we provide a general theorem that establishes a correspondence between surrogate loss functions in classification and the family of f-divergences. Moreover, we provide constructive procedures for determining the f-divergence induced by a given surrogate loss, and conversely for finding all surrogate loss functions that realize a given f-divergence. Next we introduce the notion of universal equivalence among loss functions and corresponding f-divergences, and provide necessary and sufficient conditions for universal equivalence to hold. These ideas have applications to classification problems that also involve a component of experiment design; in particular, we leverage our results to prove consistency of a procedure for learning a classifier under decentralization requirements.
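For reference, the family of f-divergences invoked above has the standard form (a well-known definition, not a result of this paper):

```latex
D_f(P \,\|\, Q) \;=\; \int f\!\left(\frac{dP}{dQ}\right) dQ,
\qquad f \text{ convex},\; f(1) = 0,
```

where the choice $f(t) = t \log t$ recovers the Kullback-Leibler divergence and $f(t) = \tfrac{1}{2}\lvert t - 1 \rvert$ the total variation distance.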