Breaking SVM Complexity with Cross-Training

Neural Information Processing Systems

We propose to selectively remove examples from the training set using probabilistic estimates related to editing algorithms (Devijver and Kittler, 1982). This heuristic procedure aims at creating a separable distribution of training examples with minimal impact on the position of the decision boundary. It breaks the linear dependency between the number of SVs and the number of training examples, and sharply reduces the complexity of SVMs during both the training and prediction stages.
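The flavor of such an editing step can be shown with a small, self-contained sketch (pure Python, hypothetical helper name `knn_edit`): a k-nearest-neighbour rule drops examples whose neighbourhood majority label disagrees with their own, in the spirit of Devijver-and-Kittler-style editing rather than the paper's exact probabilistic estimates.

```python
from collections import Counter

def knn_edit(points, labels, k=3):
    """Drop examples whose k nearest neighbours mostly disagree with
    their own label; such points likely sit on the wrong side of the
    boundary and make the training set non-separable.
    Returns the indices of the kept examples."""
    kept = []
    for i, (x, y) in enumerate(zip(points, labels)):
        # k nearest neighbours of point i (1-D distance for this toy)
        neigh = sorted(
            (abs(x - x2), y2)
            for j, (x2, y2) in enumerate(zip(points, labels)) if j != i
        )[:k]
        majority = Counter(lab for _, lab in neigh).most_common(1)[0][0]
        if majority == y:
            kept.append(i)
    return kept

# Two 1-D clusters with one mislabelled point at x = 0.9
pts = [0.0, 0.1, 0.2, 0.9, 1.0, 1.1, 1.2]
labs = [0, 0, 0, 0, 1, 1, 1]
print(knn_edit(pts, labs))  # index 3 (the mislabelled point) is dropped
```

After editing, the surviving examples form a cleanly separable set, which is what lets the SVM get away with far fewer support vectors.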


Edge of Chaos Computation in Mixed-Mode VLSI - A Hard Liquid

Neural Information Processing Systems

Computation without stable states is a computing paradigm different from Turing's and has been demonstrated for various types of simulated neural networks. This publication transfers the paradigm to a hardware-implemented neural network. Results of a software implementation are reproduced, showing that performance peaks when the network exhibits dynamics at the edge of chaos. The liquid computing approach seems well suited for operating analog computing devices such as the VLSI neural network used here.


A Topographic Support Vector Machine: Classification Using Local Label Configurations

Neural Information Processing Systems

The standard approach to the classification of objects is to consider the examples as independent and identically distributed (iid). In many real world settings, however, this assumption is not valid, because a topographical relationship exists between the objects. In this contribution we consider the special case of image segmentation, where the objects are pixels and where the underlying topography is a 2D regular rectangular grid. We introduce a classification method which not only uses measured vectorial feature information but also the label configuration within a topographic neighborhood. Due to the resulting dependence between the labels of neighboring pixels, a collective classification of a set of pixels becomes necessary. We propose a new method called 'Topographic Support Vector Machine' (TSVM), which is based on a topographic kernel and a self-consistent solution to the label assignment shown to be equivalent to a recurrent neural network. The performance of the algorithm is compared to a conventional SVM on a cell image segmentation task.
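The idea of a self-consistent label assignment can be caricatured with an ICM-style relaxation on a grid (a minimal sketch in plain Python; the paper's method uses a topographic kernel and a recurrent-network formulation that this toy omits): each pixel's label is repeatedly recomputed from its own score plus its four neighbours' current labels.

```python
def relax_labels(unary, weight=0.5, iters=10):
    """Collective labelling sketch: start from per-pixel scores
    ('unary'), then iterate label updates that mix each pixel's own
    score with its 4-neighbourhood labels until self-consistent."""
    h, w = len(unary), len(unary[0])
    lab = [[1 if u > 0 else -1 for u in row] for row in unary]
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                nb = sum(lab[x][y] for x, y in
                         ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= x < h and 0 <= y < w)
                lab[i][j] = 1 if unary[i][j] + weight * nb > 0 else -1
    return lab

# A weakly negative centre pixel is flipped by its positive neighbours
print(relax_labels([[1, 1, 1], [1, -0.4, 1], [1, 1, 1]]))
```

The dependence between neighbouring labels is exactly why a per-pixel (iid) classifier is not enough here: the centre pixel's own evidence says one thing, but the neighbourhood configuration overrules it.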


Parallel Support Vector Machines: The Cascade SVM

Neural Information Processing Systems

We describe an algorithm for support vector machines (SVM) that can be parallelized efficiently and scales to very large problems with hundreds of thousands of training vectors. Instead of analyzing the whole training set in one optimization step, the data are split into subsets and optimized separately with multiple SVMs. The partial results are combined and filtered again in a 'Cascade' of SVMs, until the global optimum is reached. The Cascade SVM can be spread over multiple processors with minimal communication overhead and requires far less memory, since the kernel matrices are much smaller than for a regular SVM. Convergence to the global optimum is guaranteed with multiple passes through the Cascade, but already a single pass provides good generalization. A single pass is 5x - 10x faster than a regular SVM for problems of 100,000 vectors when implemented on a single processor. Parallel implementations on a cluster of 16 processors were tested with over 1 million vectors (2-class problems), converging in a day or two, while a regular SVM never converged in over a week.
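The split-solve-merge structure of one cascade pass can be sketched as follows. The `toy_support_vectors` stand-in simply keeps the points of each class closest to the other class; the real Cascade SVM runs a QP solver at each node. The sketch also assumes a power-of-two number of subsets, each containing both classes.

```python
def toy_support_vectors(X, y, m=2):
    """Stand-in for an SVM solver: keep the m points of each class
    nearest to the opposite class (assumes both classes are present)."""
    out = []
    for c in sorted(set(y)):
        pts = [x for x, lab in zip(X, y) if lab == c]
        other = [x for x, lab in zip(X, y) if lab != c]
        pts.sort(key=lambda p: min(abs(p - q) for q in other))
        out += [(p, c) for p in pts[:m]]
    return out

def cascade(X, y, n_splits=2):
    """One cascade pass: split the data, solve each subset, then merge
    surviving 'support vectors' pairwise and re-solve until one set is left."""
    data = list(zip(X, y))
    chunks = [data[i::n_splits] for i in range(n_splits)]
    layer = [toy_support_vectors([p for p, _ in c], [l for _, l in c])
             for c in chunks]
    while len(layer) > 1:
        layer = [toy_support_vectors([p for p, _ in pair], [l for _, l in pair])
                 for pair in (a + b for a, b in zip(layer[::2], layer[1::2]))]
    return layer[0]

X = [0.0, 0.2, 0.4, 0.6, 1.0, 1.2, 1.4, 1.6]
y = [0, 0, 0, 0, 1, 1, 1, 1]
print(sorted(cascade(X, y)))
```

Each node only ever sees a fraction of the data, which is why the kernel matrices stay small; repeating the pass with the final support vectors fed back to the first layer is what guarantees convergence to the global optimum.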


Constraining a Bayesian Model of Human Visual Speed Perception

Neural Information Processing Systems

It has been demonstrated that basic aspects of human visual motion perception are qualitatively consistent with a Bayesian estimation framework, where the prior probability distribution on velocity favors slow speeds. Here, we present a refined probabilistic model that can account for the typical trial-to-trial variabilities observed in psychophysical speed perception experiments. We also show that data from such experiments can be used to constrain both the likelihood and prior functions of the model. Specifically, we measured matching speeds and thresholds in a two-alternative forced choice speed discrimination task. Parametric fits to the data reveal that the likelihood function is well approximated by a LogNormal distribution with a characteristic contrast-dependent variance, and that the prior distribution on velocity exhibits significantly heavier tails than a Gaussian, and approximately follows a power-law function.
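The estimation framework can be illustrated numerically: with a LogNormal likelihood centred on the measured speed and a power-law prior favouring slow speeds, the MAP estimate shifts toward slower speeds as the likelihood widens, i.e. as contrast falls. The parameter values below are illustrative choices, not the paper's fitted ones.

```python
import math

def map_speed(v_meas, sigma, a=1.5):
    """Grid-search MAP estimate of perceived speed: LogNormal
    likelihood around the measured speed (width sigma grows as
    contrast falls) times a power-law prior p(v) ~ v**-a.
    Illustrative parameter values, not fits from the paper."""
    def log_post(v):
        like = -(math.log(v_meas) - math.log(v)) ** 2 / (2 * sigma ** 2)
        prior = -a * math.log(v)
        return like + prior
    grid = [0.001 * i for i in range(1, 3001)]
    return max(grid, key=log_post)

# Low contrast (large sigma) biases the estimate toward slower speeds
print(map_speed(1.0, 0.3), map_speed(1.0, 0.8))
```

For this model the MAP has the closed form v_meas * exp(-a * sigma**2), so the slow-speed bias grows exponentially with the likelihood variance; the grid search above just recovers it numerically.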


Active Learning for Anomaly and Rare-Category Detection

Neural Information Processing Systems

We introduce a novel active-learning scenario in which a user wants to work with a learning algorithm to identify useful anomalies. These are distinguished from the traditional statistical definition of anomalies as outliers or merely ill-modeled points. Our distinction is that the usefulness of anomalies is categorized subjectively by the user. We make two additional assumptions. First, there exist extremely few useful anomalies to be hunted down within a massive dataset.


Joint Probabilistic Curve Clustering and Alignment

Neural Information Processing Systems

Clustering and prediction of sets of curves is an important problem in many areas of science and engineering. It is often the case that curves tend to be misaligned from each other in a continuous manner, either in space (across the measurements) or in time. We develop a probabilistic framework that allows for joint clustering and continuous alignment of sets of curves in curve space (as opposed to a fixed-dimensional feature vector space). The proposed methodology integrates new probabilistic alignment models with model-based curve clustering algorithms. The probabilistic approach allows for the derivation of consistent EM learning algorithms for the joint clustering-alignment problem. Experimental results are shown for alignment of human growth data, and joint clustering and alignment of gene expression time-course data.


Learning Preferences for Multiclass Problems

Neural Information Processing Systems

Many interesting multiclass problems can be cast in the general framework of label ranking defined on a given set of classes. The evaluation for such a ranking is generally given in terms of the number of violated order constraints between classes. In this paper, we propose the Preference Learning Model as a unifying framework to model and solve a large class of multiclass problems in a large margin perspective. In addition, an original kernel-based method is proposed and evaluated on a ranking dataset with state-of-the-art results.


Supervised Graph Inference

Neural Information Processing Systems

We formulate the problem of graph inference where part of the graph is known as a supervised learning problem, and propose an algorithm to solve it. The method involves the learning of a mapping of the vertices to a Euclidean space where the graph is easy to infer, and can be formulated as an optimization problem in a reproducing kernel Hilbert space. We report encouraging results on the problem of metabolic network reconstruction from genomic data.