Common-Frame Model for Object Recognition
Moreels, Pierre, Perona, Pietro
A generative probabilistic model for objects in images is presented. An object consists of a constellation of features. Feature appearance and pose are modeled probabilistically. Scene images are generated by drawing a set of objects from a given database, with random clutter sprinkled on the remaining image surface.
Exploration-Exploitation Tradeoffs for Experts Algorithms in Reactive Environments
Farias, Daniela D., Megiddo, Nimrod
A reactive environment is one that responds to the actions of an agent rather than evolving obliviously. In reactive environments, experts algorithms must balance exploration and exploitation of experts more carefully than in oblivious ones. In addition, a more subtle definition of a learnable value of an expert is required. A general exploration-exploitation experts method is presented along with a proper definition of value. The method is shown to asymptotically perform as well as the best available expert. Several variants are analyzed from the viewpoint of the exploration-exploitation tradeoff, including explore-then-exploit, polynomially vanishing exploration, constant-frequency exploration, and constant-size exploration phases. Complexity and performance bounds are proven.
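One of the variants named above, constant-frequency exploration, can be sketched as a simple loop: every k-th round the algorithm explores an expert round-robin, and otherwise it exploits the expert with the best estimated value. This is a minimal illustrative sketch, not the authors' exact method; the function name and interface are my own.

```python
def experts_constant_freq(experts, get_reward, rounds, explore_every=5):
    """Constant-frequency exploration: every `explore_every`-th round, follow
    an expert chosen round-robin (exploration); otherwise follow the expert
    with the highest estimated value so far (exploitation)."""
    n = len(experts)
    totals = [0.0] * n   # cumulative reward observed when following each expert
    counts = [0] * n     # number of rounds each expert was followed
    next_explore = 0
    history = []
    for t in range(rounds):
        if t % explore_every == 0:       # exploration round
            i = next_explore % n
            next_explore += 1
        else:                            # exploitation round: best estimate wins
            i = max(range(n), key=lambda j: totals[j] / counts[j]
                    if counts[j] else float("inf"))
        r = get_reward(experts[i], t)    # the (possibly reactive) environment
        totals[i] += r
        counts[i] += 1
        history.append(i)
    return history, [totals[j] / counts[j] if counts[j] else 0.0
                     for j in range(n)]
```

In a truly reactive environment `get_reward` would depend on the whole history of play, which is exactly why the paper's more careful definition of an expert's value is needed.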
Markov Networks for Detecting Overlapping Elements in Sequence Data
Craven, Mark, Bockhorst, Joseph
Many sequential prediction tasks involve locating instances of patterns in sequences. Generative probabilistic language models, such as hidden Markov models (HMMs), have been successfully applied to many of these tasks. A limitation of these models, however, is that they cannot naturally handle cases in which pattern instances overlap in arbitrary ways. We present an alternative approach, based on conditional Markov networks, that can naturally represent arbitrarily overlapping elements. We show how to efficiently train and perform inference with these models. Experimental results from a genomics domain show that our models are more accurate at locating instances of overlapping patterns than are baseline models based on HMMs.
Proximity Graphs for Clustering and Manifold Learning
Zemel, Richard S., Carreira-Perpiñán, Miguel Á.
Many machine learning algorithms for clustering or dimensionality reduction take as input a cloud of points in Euclidean space, and construct a graph with the input data points as vertices. This graph is then partitioned (clustering) or used to redefine metric information (dimensionality reduction). There has been much recent work on new methods for graph-based clustering and dimensionality reduction, but not much on constructing the graph itself. Graphs typically used include the fully connected graph, a local fixed-grid graph (for image segmentation) or a nearest-neighbor graph. We suggest that the graph should adapt locally to the structure of the data. This can be achieved by a graph ensemble that combines multiple minimum spanning trees, each fit to a perturbed version of the data set. We show that such a graph ensemble usually produces a better representation of the data manifold than standard methods; and that it provides robustness to a subsequent clustering or dimensionality reduction algorithm based on the graph.
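The ensemble construction described above (multiple minimum spanning trees, each fit to a perturbed copy of the data) can be sketched in a few lines; this is a simplified illustration under my own choices of perturbation (isotropic Gaussian jitter) and output (edge-frequency matrix), not necessarily the paper's exact procedure.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def perturbed_mst_ensemble(X, n_trees=10, noise_scale=0.05, seed=0):
    """Combine the MSTs of several jittered copies of the data into one
    symmetric matrix of edge frequencies in [0, 1]."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    counts = np.zeros((n, n))
    for _ in range(n_trees):
        Xp = X + rng.normal(scale=noise_scale, size=X.shape)  # perturbed copy
        D = squareform(pdist(Xp))                             # pairwise distances
        T = minimum_spanning_tree(D).toarray()                # MST of this copy
        counts += (T > 0) | (T > 0).T                         # record edges symmetrically
    return counts / n_trees                                   # edge frequencies
```

The resulting weighted graph can then be handed to any graph-based clustering or dimensionality-reduction algorithm; edges that survive many perturbations are the ones that reflect stable local structure.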
Synergies between Intrinsic and Synaptic Plasticity in Individual Model Neurons
This paper explores the computational consequences of simultaneous intrinsic and synaptic plasticity in individual model neurons. It proposes a new intrinsic plasticity mechanism for a continuous activation model neuron based on low order moments of the neuron's firing rate distribution. The goal of the intrinsic plasticity mechanism is to enforce a sparse distribution of the neuron's activity level. In conjunction with Hebbian learning at the neuron's synapses, the neuron is shown to discover sparse directions in the input.
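A moment-based intrinsic plasticity rule of the kind described above can be illustrated with a hypothetical update: nudge the neuron's bias so the mean firing rate approaches a sparse target, and its gain so the second moment approaches that of an exponential distribution with the same mean (for which E[y²] = 2μ²). This is my own simplified stand-in for the paper's mechanism, not its actual update equations.

```python
import numpy as np

def intrinsic_plasticity_step(x, gain, bias, mu_target=0.1, eta=0.01):
    """One step of a hypothetical moment-matching intrinsic-plasticity rule
    for a logistic model neuron: match the first moment of the firing rate
    to mu_target and the second moment to 2 * mu_target**2 (exponential)."""
    y = 1.0 / (1.0 + np.exp(-(gain * x + bias)))       # logistic activation
    bias -= eta * (y.mean() - mu_target)               # first-moment error
    gain -= eta * ((y ** 2).mean() - 2 * mu_target ** 2)  # second-moment error
    return gain, bias, y
```

Iterating this on a stream of inputs drives the neuron toward a sparse, exponential-like firing-rate distribution, which is the precondition for the Hebbian synapses to pick out sparse directions in the input.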
Efficient Out-of-Sample Extension of Dominant-Set Clusters
Pavan, Massimiliano, Pelillo, Marcello
Dominant sets are a new graph-theoretic concept that has proven to be relevant in pairwise data clustering problems, such as image segmentation. They generalize the notion of a maximal clique to edge-weighted graphs and have intriguing, nontrivial connections to continuous quadratic optimization and spectral-based grouping. We address the problem of grouping out-of-sample examples after the clustering process has taken place. This may serve either to drastically reduce the computational burden associated with the processing of very large data sets, or to efficiently deal with dynamic situations whereby data sets need to be updated continually. We show that the very notion of a dominant set offers a simple and efficient way of doing this. Numerical experiments on various grouping problems show the effectiveness of the approach.
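The flavor of out-of-sample assignment can be conveyed with a toy rule: give an unseen point to the cluster to which it has the highest average affinity. This is a simplified stand-in for the dominant-set membership criterion, not the paper's actual rule; the Gaussian affinity and the function name are my own choices.

```python
import numpy as np

def assign_new_point(x_new, X, clusters, sigma=1.0):
    """Assign an unseen point to the cluster with the highest mean affinity.
    `clusters` is a list of index arrays into the rows of X."""
    # Gaussian affinities between the new point and every clustered point
    sims = np.exp(-np.sum((X - x_new) ** 2, axis=1) / (2 * sigma ** 2))
    scores = [sims[idx].mean() for idx in clusters]  # mean affinity per cluster
    return int(np.argmax(scores))
```

The appeal of such a scheme is that it costs only one pass over the affinities of the new point, instead of re-running the clustering on the enlarged data set.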
Result Analysis of the NIPS 2003 Feature Selection Challenge
Guyon, Isabelle, Gunn, Steve, Ben-Hur, Asa, Dror, Gideon
The NIPS 2003 workshops included a feature selection competition organized by the authors. We provided participants with five datasets from different application domains and called for classification results using a minimal number of features. The competition took place over a period of 13 weeks and attracted 78 research groups. Participants were asked to make online submissions on the validation and test sets, with performance on the validation set being presented immediately to the participant and performance on the test set presented to the participants at the workshop. In total 1863 entries were made on the validation sets during the development period and 135 entries on all test sets for the final competition. The winners used a combination of Bayesian neural networks with ARD priors and Dirichlet diffusion trees. Other top entries used a variety of methods for feature selection, which combined filters and/or wrapper or embedded methods using Random Forests, kernel methods, or neural networks as a classification engine. The results of the benchmark (including the predictions made by the participants and the features they selected) and the scoring software are publicly available. The benchmark is available at www.nipsfsc.ecs.soton.ac.uk for post-challenge submissions to stimulate further research.
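The simplest of the feature selection strategies mentioned above, a univariate filter, can be sketched as follows: rank features by the absolute value of their correlation with the label and keep the top k, before handing the reduced data to any classifier. This is a generic illustration of the filter stage, not any particular entrant's method.

```python
import numpy as np

def filter_top_k(X, y, k):
    """Univariate filter: return the indices of the k features with the
    largest absolute Pearson correlation with the label vector y."""
    Xc = X - X.mean(axis=0)                      # center each feature
    yc = y - y.mean()                            # center the labels
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.argsort(-np.abs(corr))[:k]         # top-k by |correlation|
```

Filters like this are cheap (one pass over the data, no model fitting), which is why the top entries typically combined them with a wrapper or embedded method only on the surviving features.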