Worst-Case Bounds for Gaussian Process Models
Kakade, Sham M., Seeger, Matthias W., Foster, Dean P.
We present a competitive analysis of some nonparametric Bayesian algorithms in a worst-case online learning setting, where no probabilistic assumptions about the generation of the data are made. We consider models which use a Gaussian process prior (over the space of all functions) and provide bounds on the regret (under the log loss) for commonly used non-parametric Bayesian algorithms -- including Gaussian regression and logistic regression -- which show how these algorithms can perform favorably under rather general conditions.
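The abstract does not give the exact form of the bounds, but worst-case log-loss regret bounds for GP regression typically involve the log-determinant of the kernel Gram matrix. A minimal numerical sketch, assuming a bound of the form (1/2)||f||_H^2 + (1/2) log det(I + K/sigma^2); the RBF kernel choice and all function names are illustrative, not the paper's:

    import numpy as np

    def rbf_kernel(X, lengthscale=1.0):
        # Squared-exponential kernel: K[i, j] = exp(-||x_i - x_j||^2 / (2 l^2)).
        sq = np.sum(X**2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
        return np.exp(-d2 / (2.0 * lengthscale**2))

    def gp_logloss_regret_bound(K, f_rkhs_norm_sq, noise_var=1.0):
        # Hypothetical bound of the assumed form:
        #   (1/2) ||f||_H^2 + (1/2) log det(I + K / sigma^2).
        n = K.shape[0]
        _, logdet = np.linalg.slogdet(np.eye(n) + K / noise_var)
        return 0.5 * f_rkhs_norm_sq + 0.5 * logdet

    X = np.random.default_rng(0).normal(size=(50, 2))
    print(gp_logloss_regret_bound(rbf_kernel(X), f_rkhs_norm_sq=4.0))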
Location-based activity recognition
Liao, Lin, Fox, Dieter, Kautz, Henry
Learning patterns of human behavior from sensor data is extremely important for high-level activity inference. We show how to extract and label a person's activities and significant places from traces of GPS data. In contrast to existing techniques, our approach simultaneously detects and classifies the significant locations of a person and takes the high-level context into account. Our system uses relational Markov networks to represent the hierarchical activity model that encodes the complex relations among GPS readings, activities and significant places. We apply FFT-based message passing to perform efficient summation over large numbers of nodes in the networks.
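As a rough illustration of the FFT trick behind such summations (not the paper's relational Markov network code): summing many independent indicator nodes means convolving their count distributions, which the FFT does in O(m log m) per combination. All names below are hypothetical.

    import numpy as np

    def convolve_counts_fft(p, q):
        # The sum of two independent integer-valued variables has the
        # convolution of their distributions; FFT computes it in O(m log m).
        m = len(p) + len(q) - 1
        size = 1 << (m - 1).bit_length()
        r = np.fft.irfft(np.fft.rfft(p, size) * np.fft.rfft(q, size), size)[:m]
        r = np.clip(r, 0.0, None)  # clear tiny negative FFT round-off
        return r / r.sum()

    def sum_of_bernoullis(probs):
        # Distribution of the count of "on" indicator nodes, combined via FFT.
        dist = np.array([1.0])
        for p in probs:
            dist = convolve_counts_fft(dist, np.array([1.0 - p, p]))
        return dist

    print(sum_of_bernoullis([0.2, 0.7, 0.5]))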
Scaling Laws in Natural Scenes and the Inference of 3D Shape
Lee, Tai-sing, Potetz, Brian R.
This paper explores the statistical relationship between natural images and their underlying range (depth) images. We look at how this relationship changes over scale, and how this information can be used to enhance low-resolution range data using a full-resolution intensity image. Based on our findings, we propose an extension to an existing technique known as shape recipes [3], and the success of the two methods is compared using images and laser scans of real scenes. Our extension is shown to provide a twofold improvement over the current method. Furthermore, we demonstrate that ideal linear shape-from-shading filters, when learned from natural scenes, may derive even more strength from shadow cues than from the traditional linear-Lambertian shading cues.
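A minimal sketch of a shape-recipe-style step, assuming the recipe is a linear filter regressed from an image band to the corresponding range band; the patch size and function names are illustrative, and scipy is used only for the final filtering:

    import numpy as np
    from scipy.signal import convolve2d

    def learn_shape_recipe(image_band, range_band, patch=5):
        # Least-squares "recipe": regress each range-band pixel on the
        # surrounding image-band patch. Patch size is illustrative.
        h, w = image_band.shape
        r = patch // 2
        X, y = [], []
        for i in range(r, h - r):
            for j in range(r, w - r):
                X.append(image_band[i - r:i + r + 1, j - r:j + r + 1].ravel())
                y.append(range_band[i, j])
        coef, *_ = np.linalg.lstsq(np.asarray(X), np.asarray(y), rcond=None)
        return coef.reshape(patch, patch)

    def apply_recipe(image_band, filt):
        # Predict a range band by correlating the image band with the filter.
        return convolve2d(image_band, filt[::-1, ::-1], mode='same')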
Metric Learning by Collapsing Classes
Globerson, Amir, Roweis, Sam T.
We present an algorithm for learning a quadratic Gaussian metric (Mahalanobis distance) for use in classification tasks. Our method relies on the simple geometric intuition that a good metric is one under which points in the same class are simultaneously near each other and far from points in the other classes. We construct a convex optimization problem whose solution generates such a metric by trying to collapse all examples in the same class to a single point and push examples in other classes infinitely far away. We show that when the metric we learn is used in simple classifiers, it yields substantial improvements over standard alternatives on a variety of problems. We also discuss how the learned metric may be used to obtain a compact low-dimensional feature representation of the original input space, allowing more efficient classification with very little reduction in performance.
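A small sketch of the collapsing-classes idea under stated assumptions: p_A(j|i) is a softmax over negative Mahalanobis distances, the target distribution is uniform over same-class points (each class needs at least two examples), and the PSD constraint is enforced by clipping eigenvalues. This follows the geometric intuition above, not necessarily the paper's exact optimizer.

    import numpy as np

    def mcml_grad(A, X, y):
        # Gradient of sum_i KL(p0(.|i) || p_A(.|i)), where p_A(j|i) is a
        # softmax over negative Mahalanobis distances and p0 is uniform
        # over same-class points (each class needs >= 2 examples).
        n = len(X)
        diff = X[:, None, :] - X[None, :, :]
        d = np.einsum('ijk,kl,ijl->ij', diff, A, diff)
        np.fill_diagonal(d, np.inf)                    # exclude j == i
        P = np.exp(-d)
        P /= P.sum(axis=1, keepdims=True)
        same = (y[:, None] == y[None, :]) & ~np.eye(n, dtype=bool)
        P0 = same / same.sum(axis=1, keepdims=True)
        return np.einsum('ij,ijk,ijl->kl', P0 - P, diff, diff)

    def project_psd(A):
        # Enforce the PSD constraint by clipping negative eigenvalues.
        w, V = np.linalg.eigh((A + A.T) / 2.0)
        return (V * np.clip(w, 0.0, None)) @ V.T

    X = np.random.default_rng(0).normal(size=(30, 2))
    y = np.repeat(np.arange(3), 10)
    A = np.eye(2)
    for _ in range(200):
        A = project_psd(A - 0.01 * mcml_grad(A, X, y))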
Large scale networks fingerprinting and visualization using the k-core decomposition
Alvarez-Hamelin, J. Ignacio, Dall'Asta, Luca, Barrat, Alain, Vespignani, Alessandro
We use the k-core decomposition to develop algorithms for the analysis of large scale complex networks. This decomposition, based on a recursive pruning of the least connected vertices, allows us to disentangle the hierarchical structure of networks by progressively focusing on their central cores. Using this strategy, we develop a general visualization algorithm that can be used to compare the structural properties of various networks and highlight their hierarchical structure. The low computational complexity of the algorithm, O(n + e), where n is the size of the network and e is the number of edges, makes it suitable for the visualization of very large sparse networks. We show how the proposed visualization tool allows us to find specific structural fingerprints of networks.
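The recursive pruning is easy to make concrete. A minimal sketch of the k-core decomposition by repeatedly peeling the minimum-degree vertex (a bucket-based variant achieves the O(n + e) time cited above; this version favors brevity):

    def core_decomposition(adj):
        # Peel the minimum-degree vertex repeatedly; the core number of v is
        # the largest k reached before v is removed.
        degree = {v: len(nbrs) for v, nbrs in adj.items()}
        core, remaining, k = {}, set(adj), 0
        while remaining:
            v = min(remaining, key=degree.get)
            k = max(k, degree[v])
            core[v] = k
            remaining.remove(v)
            for u in adj[v]:
                if u in remaining:
                    degree[u] -= 1
        return core

    # Triangle 1-2-3 with pendant vertex 4: cores {1: 2, 2: 2, 3: 2, 4: 1}.
    print(core_decomposition({1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}))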
Efficient Estimation of OOMs
Jaeger, Herbert, Zhao, Mingjie, Kolling, Andreas
A standard method to obtain stochastic models for symbolic time series is to train state-emitting hidden Markov models (SE-HMMs) with the Baum-Welch algorithm. Based on observable operator models (OOMs), in the last few months a number of novel learning algorithms for similar purposes have been developed: (1,2) two versions of an "efficiency sharpening" (ES) algorithm, which iteratively improves the statistical efficiency of a sequence of OOM estimators, and (3) a constrained gradient descent ML estimator for transition-emitting HMMs (TE-HMMs). We give an overview of these algorithms and compare them with SE-HMM/EM learning on synthetic and real-life data.
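For context, an OOM scores a symbol sequence by applying one observable operator per symbol to a starting state. A minimal sketch, with illustrative toy operators chosen so that the usual normalization constraint sigma (sum_a tau_a) = sigma holds:

    import numpy as np

    def oom_sequence_prob(tau, w0, sigma, seq):
        # P(a_1 ... a_n) = sigma tau_{a_n} ... tau_{a_1} w0 under an OOM with
        # one observable operator tau_a per symbol a.
        w = w0
        for a in seq:
            w = tau[a] @ w
        return float(sigma @ w)

    # Toy two-symbol OOM; values chosen so sigma (tau_a + tau_b) = sigma.
    tau = {'a': np.array([[0.4, 0.1], [0.1, 0.2]]),
           'b': np.array([[0.2, 0.1], [0.3, 0.6]])}
    w0 = np.array([0.5, 0.5])
    sigma = np.ones(2)
    print(oom_sequence_prob(tau, w0, sigma, 'ab'))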
Variational EM Algorithms for Non-Gaussian Latent Variable Models
Palmer, Jason, Kreutz-Delgado, Kenneth, Rao, Bhaskar D., Wipf, David P.
We consider criteria for variational representations of non-Gaussian latent variables, and derive variational EM algorithms in general form. We establish a general equivalence among convex bounding methods, evidence-based methods, and ensemble learning/variational Bayes methods, which has previously been demonstrated only for particular cases.
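One common instance of such a representation (a hedged example; the paper's criteria are more general) is the super-Gaussian bound: for $p(x) = e^{-g(x^2)}$ with $g$ concave,

    p(x) = \sup_{\xi > 0} \mathcal{N}(x \mid 0, \xi)\,\varphi(\xi),

and the E-step of the resulting variational EM selects the tangent point

    \xi^\star = \frac{1}{2\,g'(x^2)},

which for the Laplace prior $p(x) \propto e^{-|x|}$ recovers the familiar $\xi^\star = |x|$.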
Pattern Recognition from One Example by Chopping
Fleuret, Francois, Blanchard, Gilles
We investigate the learning of the appearance of an object from a single image of it. Instead of using a large number of pictures of the object to recognize, we use a labeled reference database of pictures of other objects to learn invariance to noise and variations in pose and illumination. This acquired knowledge is then used to predict if two pictures of new objects, which do not appear in the training pictures, actually display the same object. We propose a generic scheme called chopping to address this task. It relies on hundreds of random binary splits of the training set chosen to keep together the images of any given object. Those splits are extended to the complete image space with a simple learning algorithm. Given two images, the responses of the split predictors are combined with a Bayesian rule into a posterior probability of similarity.
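A minimal sketch of the final combination step, assuming each split predictor returns a binary response per image and that agreement rates for same/different pairs are known constants (all parameter values below are illustrative, not the paper's):

    import numpy as np

    def chopping_posterior(r1, r2, p_same=0.9, p_diff=0.5, prior=0.5):
        # r1, r2: {0, 1} responses of the split predictors on two images.
        # Naive Bayes combination; agreement on a split is evidence that the
        # images show the same object.
        agree = np.asarray(r1) == np.asarray(r2)
        ls = np.log(prior) + np.where(agree, np.log(p_same),
                                      np.log(1 - p_same)).sum()
        ld = np.log(1 - prior) + np.where(agree, np.log(p_diff),
                                          np.log(1 - p_diff)).sum()
        m = max(ls, ld)
        return np.exp(ls - m) / (np.exp(ls - m) + np.exp(ld - m))

    rng = np.random.default_rng(1)
    r1 = rng.integers(0, 2, 100)
    r2 = r1.copy()
    r2[:20] ^= 1                    # two images agreeing on 80 of 100 splits
    print(chopping_posterior(r1, r2))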