Goto




Map-Reduce for Machine Learning on Multicore

Neural Information Processing Systems

We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speedup. In this paper, we develop a broadly applicable parallel-programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers.
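
As a concrete illustration of the "summation form" idea, here is a minimal sketch (not the paper's actual framework) that fits least squares by computing the sufficient statistics A = sum_i x_i x_i^T and b = sum_i y_i x_i as per-shard partial sums (map) and adding them up (reduce); all function and variable names are illustrative.

```python
# Minimal sketch: least squares in "summation form", map-reduce style.
# The statistics A = sum_i x_i x_i^T and b = sum_i y_i x_i are computed
# independently per data shard (map) and then added (reduce); only the
# final d x d solve runs on a single core.
import numpy as np
from multiprocessing import Pool

def map_stats(shard):
    X, y = shard
    return X.T @ X, X.T @ y               # partial sums for this shard

def fit_parallel(X, y, n_cores=4):
    shards = list(zip(np.array_split(X, n_cores), np.array_split(y, n_cores)))
    with Pool(n_cores) as pool:
        parts = pool.map(map_stats, shards)   # map step, one task per core
    A = sum(p[0] for p in parts)              # reduce step
    b = sum(p[1] for p in parts)
    return np.linalg.solve(A, b)              # theta minimizing ||X theta - y||^2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 5))
    y = X @ np.arange(1.0, 6.0) + 0.01 * rng.normal(size=10_000)
    print(fit_parallel(X, y))                 # ~ [1, 2, 3, 4, 5]
```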


Dynamic Foreground/Background Extraction from Images and Videos using Random Patches

Neural Information Processing Systems

In this paper, we propose a novel exemplar-based approach to extract dynamic foreground regions from a changing background within a collection of images or a video sequence. By using image segmentation as a pre-processing step, we convert this traditional pixel-wise labeling problem into a lower-dimensional supervised binary-labeling procedure on image segments. Our approach consists of three steps. First, a set of random image patches is spatially and adaptively sampled within each segment. Second, these sets of extracted samples are formed into two "bags of patches" to model the foreground and background appearance, respectively.
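
To make the patch-sampling step concrete, below is a minimal sketch, assuming a grayscale image and a boolean segment mask; the patch size, count, and uniform sampling are illustrative choices, not the paper's settings.

```python
# Minimal sketch: sample random patches whose centers lie inside one image
# segment, then pool per-class samples into two "bags of patches".
import numpy as np

def sample_patches(image, segment_mask, n_patches=50, size=7, seed=0):
    """image: 2-D array; segment_mask: boolean array of the same shape."""
    rng = np.random.default_rng(seed)
    h = size // 2
    ys, xs = np.nonzero(segment_mask)
    ok = ((ys >= h) & (ys < image.shape[0] - h) &
          (xs >= h) & (xs < image.shape[1] - h))   # keep patches inside image
    ys, xs = ys[ok], xs[ok]
    pick = rng.choice(len(ys), size=min(n_patches, len(ys)), replace=False)
    return [image[y - h:y + h + 1, x - h:x + h + 1]
            for y, x in zip(ys[pick], xs[pick])]

# Two appearance models, one bag per class (segment masks assumed given):
# fg_bag = sum((sample_patches(img, m) for m in fg_segment_masks), [])
# bg_bag = sum((sample_patches(img, m) for m in bg_segment_masks), [])
```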


Particle Filtering for Nonparametric Bayesian Matrix Factorization

Neural Information Processing Systems

Many unsupervised learning problems can be expressed as a form of matrix factorization, reconstructing an observed data matrix as the product of two matrices of latent variables. A standard challenge in solving these problems is determining the dimensionality of the latent matrices.
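
The dimensionality challenge is easy to see with a toy reconstruction: training error alone always prefers a larger latent dimension, which is why it must be inferred rather than tuned by fit. The truncated-SVD sketch below illustrates only the problem, not the paper's particle-filtering method.

```python
# Toy illustration: the best rank-K reconstruction error decreases
# monotonically in K, so training error alone cannot identify the true
# latent dimension (3 here).
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 3))                # latent variables, true K = 3
A = rng.normal(size=(3, 20))
X = Z @ A + 0.1 * rng.normal(size=(100, 20)) # observed data matrix

U, s, Vt = np.linalg.svd(X, full_matrices=False)
for K in (1, 2, 3, 5, 10):
    Xhat = (U[:, :K] * s[:K]) @ Vt[:K]       # best rank-K approximation
    print(K, round(np.linalg.norm(X - Xhat), 2))
```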


Efficient Structure Learning of Markov Networks using $L_1$-Regularization

Neural Information Processing Systems

Markov networks are commonly used in a wide variety of applications, ranging from computer vision, to natural language, to computational biology. In most current applications, even those that rely heavily on learned models, the structure of the Markov network is constructed by hand, due to the lack of effective algorithms for learning Markov network structure from data. In this paper, we provide a computationally efficient method for learning Markov network structure from data.
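
As a simplified illustration of how an L1 penalty induces sparse structure, here is a per-node neighborhood-selection sketch using L1-regularized logistic regression; the paper itself optimizes a regularized objective over the whole network, and the toy data and regularization strength below are illustrative.

```python
# Simplified illustration: an L1 penalty recovers the sparse neighborhood of
# one node in a binary Markov network. Node 1 is a noisy copy of node 0
# (a real edge) and independent of node 2 (no edge); the L1-regularized
# logistic regression keeps the first weight and shrinks the second to ~0.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x0 = rng.integers(0, 2, n)
x1 = (x0 ^ (rng.random(n) < 0.1)).astype(int)   # edge 0-1: flip 10% of bits
x2 = rng.integers(0, 2, n)                      # no edge to node 1

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(np.column_stack([x0, x2]), x1)
print(clf.coef_)   # e.g. [[large, ~0.0]] -- only the true edge survives
```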



Multi-Task Feature Learning

Neural Information Processing Systems

We present a method for learning a low-dimensional representation which is shared across a set of multiple related tasks. The method builds upon the well-known 1-norm regularization problem using a new regularizer which controls the number of learned features common to all the tasks. We show that this problem is equivalent to a convex optimization problem and develop an iterative algorithm for solving it. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the latter step we learn common-across-tasks representations and in the former step we learn task-specific functions using these representations. We report experiments on a simulated and a real data set which demonstrate that the proposed method dramatically improves the performance relative to learning each task independently. Our algorithm can also be used, as a special case, to simply select (not learn) a few common features across the tasks.
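
A minimal sketch of the alternating scheme, assuming squared loss: the supervised step solves one regularized problem per task given a shared feature metric D, and the unsupervised step updates D in closed form as (W W^T)^(1/2) normalized to unit trace. The function name and the small ridge term eps are illustrative.

```python
# Minimal sketch of the alternating algorithm, assuming squared loss.
# Supervised step: one regularized solve per task given the feature metric D.
# Unsupervised step: closed-form update D = (W W^T)^(1/2) / tr((W W^T)^(1/2)).
import numpy as np

def multi_task_feature_learning(Xs, ys, gamma=1.0, iters=50, eps=1e-6):
    d = Xs[0].shape[1]
    D = np.eye(d) / d                            # start from trace(D) = 1
    for _ in range(iters):
        Dinv = np.linalg.inv(D + eps * np.eye(d))   # eps keeps D invertible
        W = np.column_stack([                    # supervised step
            np.linalg.solve(X.T @ X + gamma * Dinv, X.T @ y)
            for X, y in zip(Xs, ys)])
        evals, V = np.linalg.eigh(W @ W.T)       # unsupervised step
        sqrtC = (V * np.sqrt(np.clip(evals, 0, None))) @ V.T
        D = sqrtC / np.trace(sqrtC)
    return W, D                                  # task weights, shared metric
```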


The Neurodynamics of Belief Propagation on Binary Markov Random Fields

Neural Information Processing Systems

We rigorously establish a close relationship between message passing algorithms and models of neurodynamics by showing that the equations of a continuous Hopfield network can be derived from the equations of belief propagation on a binary Markov random field. As Hopfield networks are equipped with a Lyapunov function, convergence is guaranteed. As a consequence, in the limit of many weak connections per neuron, Hopfield networks exactly implement a continuous-time variant of belief propagation, starting from message initialisations that avoid convergence problems. Our results lead to a better understanding of the role of message passing algorithms in real biological neural networks.
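
For reference, this is what belief propagation on a binary (Ising-style) MRF looks like in the log-odds ("cavity field") parameterization; the sketch below is a discrete-time counterpart of the continuous-time dynamics the paper studies, and the couplings and fields are toy values.

```python
# Sketch: belief propagation on a binary (Ising-style) MRF in the log-odds
# ("cavity field") parameterization. J: symmetric couplings (0 = no edge);
# theta: local fields.
import numpy as np

def bp_binary_mrf(J, theta, iters=100):
    n = len(theta)
    u = np.zeros((n, n))                  # u[i, j]: message from i to j
    for _ in range(iters):
        h = theta + u.sum(axis=0)         # total field arriving at each node
        cavity = h[:, None] - u.T         # field at i excluding j's message
        u = np.arctanh(np.tanh(J) * np.tanh(cavity))
    return np.tanh(theta + u.sum(axis=0)) # approximate magnetizations

J = np.array([[0.0, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.0]])
print(bp_binary_mrf(J, np.array([0.2, 0.0, -0.1])))  # exact on this tree
```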


Conditional Random Sampling: A Sketch-based Sampling Technique for Sparse Data

Neural Information Processing Systems

In large-scale applications, the data are often highly sparse. Conditional random sampling (CRS) combines sketching and sampling: it converts sketches of the data into conditional random samples online in the estimation stage, with the sample size determined retrospectively.
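
Here is a minimal sketch of the CRS idea for two sparse binary vectors: each sketch keeps the k smallest nonzero coordinate IDs under a shared random permutation, the effective sample size Ds is fixed retrospectively as the smaller of the two sketch maxima, and the co-occurrence count is scaled up from Ds to the full dimension D. The simple scale-up estimator below is illustrative; the paper derives more refined estimators.

```python
# Minimal sketch of CRS for two sparse binary vectors.
import numpy as np

def sketch(nonzero_ids, perm, k):
    return np.sort(perm[nonzero_ids])[:k]        # k smallest permuted IDs

def crs_inner_product(s1, s2, D):
    Ds = min(s1.max(), s2.max())                 # retrospective sample size
    a = np.intersect1d(s1[s1 <= Ds], s2[s2 <= Ds]).size
    return a * D / Ds                            # scale up to dimension D

rng = np.random.default_rng(0)
D = 10_000
perm = rng.permutation(D)                        # shared column permutation
x = rng.choice(D, 500, replace=False)            # nonzero IDs of vector 1
y = np.unique(np.concatenate([x[:100], rng.choice(D, 400, replace=False)]))
est = crs_inner_product(sketch(x, perm, 80), sketch(y, perm, 80), D)
print("estimate:", est, "truth:", np.intersect1d(x, y).size)
```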


Large Scale Hidden Semi-Markov SVMs

Neural Information Processing Systems

We describe Hidden Semi-Markov Support Vector Machines (SHM-SVMs), an extension of HM-SVMs to semi-Markov chains. This allows us to predict segmentations of sequences based on segment-based features measuring properties such as the length of the segment. We propose a novel technique to partition the problem into sub-problems. The independently obtained partial solutions can then be recombined in an efficient way, which allows us to solve label sequence learning problems with several thousands of labeled sequences. We have tested our algorithm for predicting gene structures, an important problem in computational biology. Results on a well-known model organism illustrate the great potential of SHM-SVMs in computational biology.
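
To show what segment-based prediction involves, here is a sketch of Viterbi-style dynamic programming for a semi-Markov chain, where a user-supplied segment_score (an illustrative placeholder, not the paper's learned scoring function) scores whole segments and can therefore use features such as segment length.

```python
# Sketch: Viterbi-style decoding for a semi-Markov chain. Scores attach to
# whole segments via segment_score(start, end, prev_label, label), so
# segment-level features such as length are directly usable.
# prev_label is None for the first segment.
import numpy as np

def semi_markov_decode(T, labels, max_len, segment_score):
    V, back = {0: {None: 0.0}}, {}
    for t in range(1, T + 1):
        V[t] = {}
        for y in labels:
            best, arg = -np.inf, None
            for l in range(1, min(max_len, t) + 1):      # segment length
                for yp, v in V[t - l].items():
                    s = v + segment_score(t - l, t, yp, y)
                    if s > best:
                        best, arg = s, (t - l, yp)
            V[t][y], back[(t, y)] = best, arg
    t, y, segs = T, max(V[T], key=V[T].get), []
    while t > 0:                                         # trace back
        start, yp = back[(t, y)]
        segs.append((start, t, y))
        t, y = start, yp
    return segs[::-1]                                    # (start, end, label)

# Toy scoring: every unit scores 1, plus a bonus for length-3 exon segments.
def score(s, e, yp, y):
    return (e - s) + (2.0 if y == "exon" and e - s == 3 else 0.0)

print(semi_markov_decode(9, ("exon", "intron"), 4, score))
# -> [(0, 3, 'exon'), (3, 6, 'exon'), (6, 9, 'exon')]
```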