
Spike-Based Compared to Rate-Based Hebbian Learning

Neural Information Processing Systems

For example, a 'Hebbian' (Hebb 1949) learning rule which is driven by the correlations between presynaptic and postsynaptic rates may be used to generate neuronal receptive fields (e.g., Linsker 1986, MacKay and Miller 1990, Wimbauer et al. 1997) with properties similar to those of real neurons. A rate-based description, however, neglects effects that are due to the pulse structure of neuronal signals.
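As a minimal illustration of the rate-based picture (a sketch only, not the specific rule analyzed in the paper; the sizes, rates, learning rate, and normalization step below are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rate-based Hebbian learning: the weight change is proportional to the
# product of presynaptic and postsynaptic firing rates.
n_pre = 20
w = rng.normal(0.0, 0.1, size=n_pre)     # feedforward weights
eta = 0.01                               # learning rate

for _ in range(1000):
    pre = rng.poisson(5.0, size=n_pre).astype(float)   # presynaptic rates
    post = max(w @ pre, 0.0)                           # rectified postsynaptic rate
    w += eta * pre * post                              # Hebbian correlation term
    w /= np.linalg.norm(w)                             # keep weights bounded (Oja-style)
```

A spike-based rule, by contrast, depends on the relative timing of individual pre- and postsynaptic spikes, which this rate description discards.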


Orientation, Scale, and Discontinuity as Emergent Properties of Illusory Contour Shape

Neural Information Processing Systems

A recent neural model of illusory contour formation is based on a distribution of natural shapes traced by particles moving with constant speed in directions given by Brownian motions. The input to that model consists of pairs of position and direction constraints, and the output consists of the distribution of contours joining all such pairs. In general, these contours will not be closed and their distribution will not be scale-invariant. In this paper, we show how to compute a scale-invariant distribution of closed contours given position constraints alone and use this result to explain a well-known illusory contour effect.

1 INTRODUCTION

It has been proposed by Mumford [3] that the distribution of illusory contour shapes can be modeled by particles travelling with constant speed in directions given by Brownian motions. More recently, Williams and Jacobs [7, 8] introduced the notion of a stochastic completion field, the distribution of particle trajectories joining pairs of position and direction constraints, and showed how it could be computed in a local parallel network.
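As a rough illustration of the underlying shape model (a sketch of the generative process only, not of the paper's completion-field computation; the speed, diffusion constant, and step count are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_contour(n_steps=200, speed=1.0, sigma=0.15):
    """Trace one particle path: constant speed, heading driven by a
    Brownian motion. All parameter values here are illustrative."""
    theta = rng.uniform(0, 2 * np.pi)      # initial heading
    pos = np.zeros((n_steps, 2))
    for t in range(1, n_steps):
        theta += sigma * rng.normal()      # Brownian increment in direction
        pos[t] = pos[t - 1] + speed * np.array([np.cos(theta), np.sin(theta)])
    return pos

paths = [sample_contour() for _ in range(100)]  # crude empirical distribution of shapes
```

In the completion-field picture, such paths are then weighted by how well they join the given position and direction constraints.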


Direct Optimization of Margins Improves Generalization in Combined Classifiers

Neural Information Processing Systems

[Figure: the dark curve is AdaBoost, the light curve is DOOM. DOOM sacrifices significant training error for improved test error (horizontal marks on the margin-0 line).]

1 Introduction

Many learning algorithms for pattern classification minimize some cost function of the training data, with the aim of minimizing error (the probability of misclassifying an example). One example of such a cost function is simply the classifier's error on the training data.
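For context, the margin that both AdaBoost and DOOM operate on is the normalized weighted vote for an example's correct label. A small sketch of computing it (the stump classifiers and weights below are made up):

```python
import numpy as np

def margins(X, y, classifiers, alphas):
    """Margins of a weighted-vote combination: y * sum_t alpha_t h_t(x) / sum_t alpha_t.
    Labels y and base predictions h_t(x) are assumed to lie in {-1, +1}."""
    votes = sum(a * h(X) for a, h in zip(alphas, classifiers))
    return y * votes / np.sum(alphas)

# Hypothetical usage with two decision stumps on 1-D data.
X = np.array([-2.0, -0.5, 0.5, 2.0])
y = np.array([-1, -1, 1, 1])
stumps = [lambda X: np.sign(X), lambda X: np.sign(X - 1.0)]
print(margins(X, y, stumps, alphas=[0.7, 0.3]))  # one margin per example, in [-1, 1]
```

Both algorithms can then be read as minimizing some cost of these margins over the training set.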


General Bounds on Bayes Errors for Regression with Gaussian Processes

Neural Information Processing Systems

Based on a simple convexity lemma, we develop bounds for different types of Bayesian prediction errors for regression with Gaussian processes. The basic bounds are formulated for a fixed training set. Simpler expressions are obtained for sampling from an input distribution which equals the weight function of the covariance kernel, yielding asymptotically tight results. The results are compared with numerical experiments.
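For orientation, the basic quantity such bounds control is the Bayesian prediction error of GP regression, which at a test input equals the posterior variance under the model. A minimal sketch computing that error itself, not the paper's bounds (the RBF kernel, noise level, and inputs are illustrative):

```python
import numpy as np

def rbf(A, B, length=1.0):
    """Squared-exponential covariance kernel."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def posterior_variance(X_train, X_test, noise=0.1):
    """GP posterior variance at X_test: the Bayes error of the posterior-mean
    predictor under the model. All constants here are illustrative."""
    K = rbf(X_train, X_train) + noise**2 * np.eye(len(X_train))
    k = rbf(X_train, X_test)
    prior = rbf(X_test, X_test)
    return np.diag(prior - k.T @ np.linalg.solve(K, k))

X = np.random.default_rng(2).uniform(-1, 1, size=(20, 1))
print(posterior_variance(X, np.array([[0.0], [2.0]])))  # small near data, large far away
```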



Support Vector Machines Applied to Face Recognition

Neural Information Processing Systems

On the other hand, in face recognition there are many individuals (classes) and only a few images (samples) per person, and algorithms must recognize faces by extrapolating from the training samples. In numerous applications there can be only one training sample (image) of each person. Support vector machines (SVMs) are formulated to solve a classical two-class pattern recognition problem. We adapt SVMs to face recognition by modifying the interpretation of the output of an SVM classifier and devising a representation of facial images that is concordant with a two-class problem. A traditional SVM returns a binary value: the class of the object.
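A schematic of the general idea (a sketch under my reading of the abstract, using scikit-learn rather than the authors' implementation; the data, feature dimension, and noise levels are invented): train a two-class SVM on image-difference vectors, same person versus different person, and read off the real-valued decision score as a similarity.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Hypothetical data: 50-dimensional feature vectors for face images, with
# pairs represented by the difference of their feature vectors.
same_diffs = rng.normal(0.0, 0.5, size=(100, 50))   # small within-class differences
other_diffs = rng.normal(0.0, 2.0, size=(100, 50))  # large between-class differences
X = np.vstack([same_diffs, other_diffs])
y = np.hstack([np.ones(100), -np.ones(100)])

svm = SVC(kernel="rbf").fit(X, y)

# Instead of the binary class, use the real-valued SVM output as a
# similarity score between a probe image and each gallery image.
probe_minus_gallery = rng.normal(0.0, 0.5, size=(1, 50))
print(svm.decision_function(probe_minus_gallery))   # higher => more likely same person
```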


A Randomized Algorithm for Pairwise Clustering

Neural Information Processing Systems

We present a stochastic clustering algorithm based on pairwise similarity of data points. Our method extends existing deterministic methods, including agglomerative algorithms, min-cut graph algorithms, and connected components; thus it provides a common framework for all these methods. Our graph-based method differs from existing stochastic methods, which are based on analogy to physical systems. The stochastic nature of our method makes it more robust against noise, including accidental edges and small spurious clusters. We demonstrate the superiority of our algorithm using an example with three spiraling bands and a large amount of noise.

1 Introduction

Clustering algorithms can be divided into two categories: those that require a vectorial representation of the data, and those which use only a pairwise representation. In the former case, every data item must be represented as a vector in a real normed space, while in the second case only pairwise relations of similarity or dissimilarity are used.
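A loose sketch of the graph-sampling flavor of such a method (an assumption-laden illustration, not the paper's algorithm: the edge-sampling rule and the co-occurrence statistic are my own simplifications):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(4)

def stochastic_pairwise_clustering(S, n_samples=200):
    """Toy stochastic clustering from a pairwise similarity matrix S in [0, 1]:
    keep each edge with probability S[i, j], take connected components, and
    accumulate how often each pair lands in the same component."""
    n = len(S)
    co = np.zeros((n, n))
    for _ in range(n_samples):
        mask = np.triu(rng.random((n, n)) < S, 1)   # undirected edges, no self-loops
        _, labels = connected_components(csr_matrix(mask | mask.T), directed=False)
        co += labels[:, None] == labels[None, :]
    return co / n_samples                            # empirical co-clustering probabilities

# Example: block-structured similarities for two groups of five points each.
S = np.full((10, 10), 0.05)
S[:5, :5] = 0.9
S[5:, 5:] = 0.9
print(stochastic_pairwise_clustering(S).round(2))
```

Averaging over many sampled graphs is what gives such a method its robustness to accidental edges.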


Kernel PCA and De-Noising in Feature Spaces

Neural Information Processing Systems

Kernel PCA as a nonlinear feature extractor has proven powerful as a preprocessing step for classification algorithms. But it can also be considered a natural generalization of linear principal component analysis. This raises the question of how to use nonlinear features for data compression, reconstruction, and de-noising, applications common in linear PCA. This is a nontrivial task, as the results provided by kernel PCA live in some high-dimensional feature space and need not have pre-images in input space. This work presents ideas for finding approximate pre-images, focusing on Gaussian kernels, and shows experimental results using these pre-images in data reconstruction and de-noising on toy examples as well as on real-world data.
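For Gaussian kernels, approximate pre-images can be obtained by a fixed-point iteration of roughly the following form (a sketch; the expansion coefficients `gamma`, which come from projecting onto the leading kernel-PCA components, are assumed precomputed, and the initialization is a heuristic of mine):

```python
import numpy as np

def gaussian_preimage(X, gamma, sigma, n_iter=100):
    """Fixed-point iteration  z <- sum_i gamma_i k(z, x_i) x_i / sum_i gamma_i k(z, x_i)
    for the Gaussian kernel k(z, x) = exp(-||z - x||^2 / (2 sigma^2)).
    X: (n, d) training points; gamma: (n,) expansion coefficients of the
    feature-space point whose pre-image is sought (assumed given)."""
    z = X[np.argmax(gamma)].copy()              # heuristic initialization
    for _ in range(n_iter):
        w = gamma * np.exp(-((X - z) ** 2).sum(1) / (2 * sigma**2))
        denom = w.sum()
        if abs(denom) < 1e-12:                  # guard against degenerate weights
            break
        z = (w[:, None] * X).sum(0) / denom
    return z
```

The returned `z` is an approximate pre-image: an input-space point whose feature-space image lies close to the de-noised kernel-PCA projection.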


Vertex Identification in High Energy Physics Experiments

Neural Information Processing Systems

In High Energy Physics experiments one has to sort through a high flux of events, at a rate of tens of MHz, and select the few that are of interest. One of the key factors in making this decision is the location of the vertex where the interaction that led to the event took place. Here we present a novel solution to the problem of finding the location of the vertex, based on two feedforward neural networks with fixed architectures, whose parameters are chosen to obtain high accuracy. The system is tested on simulated data sets and is shown to perform better than conventional algorithms.

1 Introduction

An event in High Energy Physics (HEP) is the experimental result of an interaction during the collision of particles in an accelerator. The result of this interaction is the production of tens of particles, each of which is ejected in a different direction and with a different energy. Due to the quantum mechanical effects involved, the events differ from one another in the number of particles produced, the types of particles, and their energies. The trajectories of the produced particles are detected by a very large and sophisticated detector.
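As a toy stand-in for this kind of regression task (a sketch only: the data, features, and network below are invented and are not the paper's fixed architectures):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

# Hypothetical setup: regress the vertex coordinate from a vector of
# smeared detector measurements.
n_events, n_features = 2000, 16
vertex_z = rng.uniform(-10.0, 10.0, size=n_events)                        # true vertex position
hits = vertex_z[:, None] + rng.normal(0.0, 1.0, (n_events, n_features))  # noisy measurements

net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
net.fit(hits[:1500], vertex_z[:1500])
pred = net.predict(hits[1500:])
print("RMS error:", np.sqrt(np.mean((pred - vertex_z[1500:]) ** 2)))
```

At trigger rates of tens of MHz, the appeal of a fixed feedforward architecture is that, once trained, it evaluates in a constant, small number of operations per event.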


Analyzing and Visualizing Single-Trial Event-Related Potentials

Neural Information Processing Systems

Event-related potentials (ERPs) are portions of electroencephalographic (EEG) recordings that are both time- and phase-locked to experimental events. ERPs are usually averaged to increase their signal-to-noise ratio relative to non-phase-locked EEG activity, even though response activity in single epochs may vary widely in time course and scalp distribution. This study applies a linear decomposition tool, Independent Component Analysis (ICA) [1], to multichannel single-trial EEG records to derive spatial filters that decompose single-trial EEG epochs into a sum of temporally independent and spatially fixed components arising from distinct or overlapping brain or extra-brain networks. Our results on normal and autistic subjects show that ICA can separate artifactual, stimulus-locked, response-locked, and non-phase-locked background EEG activity.
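A schematic of the decomposition step (a sketch: the paper applies ICA as in [1], whereas this uses scikit-learn's FastICA on synthetic data as a convenient stand-in; channel counts and source shapes are invented):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(6)

# Illustrative stand-in for multichannel single-trial EEG, channels x time.
n_channels, n_times = 31, 5000
sources = np.vstack([
    np.sin(np.linspace(0, 100, n_times)),        # rhythmic component
    rng.laplace(size=n_times),                   # sparse, artifact-like component
    rng.normal(size=(n_channels - 2, n_times)),  # background activity
])
mixing = rng.normal(size=(n_channels, n_channels))
eeg = mixing @ sources                           # what the electrodes record

ica = FastICA(n_components=n_channels, random_state=0)
components = ica.fit_transform(eeg.T).T          # temporally independent activations
spatial_maps = ica.mixing_                       # fixed scalp projection of each component
```

The rows of `components` are the temporally independent time courses, and each column of `spatial_maps` gives the corresponding fixed spatial distribution across channels.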