ℓ₀-norm Minimization for Basis Selection

Neural Information Processing Systems

Unfortunately, the required optimization problem is often intractable because there is a combinatorial increase in the number of local minima as the number of candidate basis vectors increases.
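The abstract does not spell out the paper's algorithm, but the ℓ₀ problem it refers to — choosing at most k basis vectors to represent a signal — can be illustrated with a standard greedy approximation, orthogonal matching pursuit. The dictionary and signal below are illustrative, not from the paper:

```python
import numpy as np

def omp(D, y, k):
    """Greedy (orthogonal matching pursuit) approximation to the
    l0-constrained problem: min ||y - D w||_2  s.t.  ||w||_0 <= k.
    A sketch of the combinatorial basis-selection task, not the
    paper's own optimization method."""
    residual = y.copy()
    support = []
    w = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the dictionary column most correlated with the residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the selected columns
        w_s, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ w_s
    w[support] = w_s
    return w

# Toy example: y is an exact 2-sparse combination of dictionary columns
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 10))
D /= np.linalg.norm(D, axis=0)          # unit-norm candidate basis vectors
y = 2.0 * D[:, 3] - 1.5 * D[:, 7]
w = omp(D, y, k=2)
```

Greedy selection sidesteps the combinatorial search over supports, at the cost of recovery guarantees that hold only under conditions on the dictionary.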



Making Latin Manuscripts Searchable using gHMM's

Neural Information Processing Systems

We describe a method that can make a scanned, handwritten mediaeval Latin manuscript accessible to full-text search. A generalized HMM is fitted, using transcribed Latin to obtain a transition model and one example each of 22 letters to obtain an emission model. We show results for unigram, bigram, and trigram models.
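The transition component described above can be sketched as a character-level bigram model estimated from transcribed text. This is a toy stand-in: the paper's generalized HMM also fits an emission model from one example per letter, which is not reproduced here.

```python
from collections import Counter, defaultdict

def bigram_transitions(text):
    """Character-level bigram transition probabilities estimated from
    transcribed text -- a sketch of the transition model a gHMM over
    letters could use."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    probs = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        probs[a] = {b: c / total for b, c in nxt.items()}
    return probs

# Hypothetical scrap of transcribed Latin
probs = bigram_transitions("paternosterpater")
```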



Support Vector Classification with Input Data Uncertainty

Neural Information Processing Systems

This paper investigates a new learning model in which the input data are corrupted with noise. We present a general statistical framework to tackle this problem. Based on this statistical reasoning, we propose a novel formulation of support vector classification that allows uncertainty in the input data. We derive an intuitive geometric interpretation of the proposed formulation and develop algorithms to solve it efficiently. Empirical results show that the proposed method is superior to the standard SVM on problems with noisy input.
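The abstract does not give the formulation, but one standard way to admit bounded input uncertainty in an SVM is to assume each input may be perturbed within a ball of radius δ, so the worst-case perturbation shrinks each margin by δ‖w‖. The sketch below minimizes that robust hinge loss by subgradient descent; it is a hedged illustration of the idea, not necessarily the paper's formulation or algorithm:

```python
import numpy as np

def robust_svm(X, y, delta, C=1.0, lr=0.01, epochs=500):
    """Subgradient descent on the robust objective
    0.5||w||^2 + C * sum_i max(0, 1 - y_i(w.x_i + b) + delta*||w||)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        wn = np.linalg.norm(w) + 1e-12
        margins = y * (X @ w + b) - delta * wn
        active = margins < 1.0            # points violating the robust margin
        gw = w + C * (((-y[active])[:, None] * X[active]).sum(axis=0)
                      + active.sum() * delta * w / wn)
        gb = C * (-y[active]).sum()
        w -= lr * gw
        b -= lr * gb
    return w, b

# Toy example: two well-separated Gaussian clouds with noisy inputs
rng = np.random.default_rng(0)
Xp = rng.normal(2.0, 0.5, (20, 2))
Xn = rng.normal(-2.0, 0.5, (20, 2))
X = np.vstack([Xp, Xn])
y = np.array([1.0] * 20 + [-1.0] * 20)
w, b = robust_svm(X, y, delta=0.3)
acc = (np.sign(X @ w + b) == y).mean()
```

Geometrically, the δ‖w‖ term demands that the margin hold not just at each point but over its entire uncertainty ball.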


New Criteria and a New Algorithm for Learning in Multi-Agent Systems

Neural Information Processing Systems

We propose a new set of criteria for learning algorithms in multi-agent systems, one that is more stringent and (we argue) better justified than previously proposed criteria. Our criteria, which apply most straightforwardly in repeated games with average rewards, consist of three requirements: (a) against a specified class of opponents (this class is a parameter of the criterion) the algorithm yield a payoff that approaches the payoff of the best response; (b) against other opponents the algorithm's payoff at least approach (and possibly exceed) the security-level payoff (or maximin value); and (c) subject to these requirements, the algorithm achieve a close-to-optimal payoff in self-play. We furthermore require that these average payoffs be achieved quickly. We then present a novel algorithm and show that it meets these new criteria for a particular parameter class, the class of stationary opponents. Finally, we show that the algorithm is effective not only in theory but also empirically: using a recently introduced comprehensive game-theoretic test suite, we show that it almost universally outperforms previous learning algorithms.
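The security-level payoff in criterion (b) can be computed directly for a finite game. The sketch below restricts the row player to pure strategies for simplicity; the criterion itself concerns the mixed maximin value, so this is only an illustration:

```python
import numpy as np

def security_level(A):
    """Pure-strategy security (maximin) payoff of the row player for
    payoff matrix A: the best guaranteed payoff against any column.
    (Illustrative only -- the mixed maximin, which the criterion uses,
    requires a linear program.)"""
    return A.min(axis=1).max()

# Hypothetical 2x2 game: row 1 guarantees at least 1, row 0 only 0
value = security_level(np.array([[3, 0], [1, 2]]))
```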


Spike-timing Dependent Plasticity and Mutual Information Maximization for a Spiking Neuron Model

Neural Information Processing Systems

We derive an optimal learning rule in the sense of mutual information maximization for a spiking neuron model. Under the assumption of small fluctuations of the input, we find a spike-timing dependent plasticity (STDP) function which depends on the time course of excitatory postsynaptic potentials (EPSPs) and the autocorrelation function of the postsynaptic neuron. We show that the STDP function has both positive and negative phases. The positive phase is related to the shape of the EPSP while the negative phase is controlled by neuronal refractoriness.
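A qualitative sketch of the window described — an EPSP-shaped positive phase for pre-before-post timing and a refractoriness-driven negative phase otherwise. The functional forms and parameter values below are hypothetical, not the paper's derived STDP function:

```python
import numpy as np

def stdp_window(dt, tau_epsp=10.0, tau_ref=20.0, a_pos=1.0, a_neg=0.5):
    """Illustrative STDP window W(dt), dt = t_post - t_pre (ms).
    Positive phase: alpha function mimicking an EPSP time course.
    Negative phase: exponential decay standing in for refractoriness."""
    dt = np.asarray(dt, dtype=float)
    pos = a_pos * (dt / tau_epsp) * np.exp(1 - dt / tau_epsp) * (dt > 0)
    neg = -a_neg * np.exp(dt / tau_ref) * (dt <= 0)
    return pos + neg

# Pre-before-post (dt > 0) potentiates; post-before-pre depresses
w = stdp_window(np.array([-20.0, 0.0, 10.0]))
```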


Kernels for Multi-task Learning

Neural Information Processing Systems

This paper provides a foundation for multi-task learning using reproducing kernel Hilbert spaces of vector-valued functions. In this setting, the kernel is a matrix-valued function. Some explicit examples will be described which go beyond our earlier results in [7]. In particular, we characterize classes of matrix-valued kernels which are linear and are of the dot-product or the translation-invariant type. We discuss how these kernels can be used to model relations between the tasks and present linear multi-task learning algorithms. Finally, we present a novel proof of the representer theorem for a minimizer of a regularization functional which is based on the notion of minimal norm interpolation.
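The simplest matrix-valued kernels of this kind are separable, K(x, x') = k(x, x')·B, with k a scalar kernel and B a positive semidefinite matrix encoding relations between tasks. The sketch below builds the block kernel matrix and runs joint kernel ridge regression over two assumed-related tasks; all choices (Gaussian k, the matrix B, the data) are illustrative, and the paper characterizes broader kernel classes:

```python
import numpy as np

def multitask_kernel(X1, X2, B, gamma=1.0):
    """Separable matrix-valued kernel K(x, x') = k(x, x') * B,
    with k a scalar Gaussian kernel; returns the block kernel matrix."""
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    k = np.exp(-gamma * sq)                 # n1 x n2 scalar kernel matrix
    return np.kron(k, B)                    # (n1*T) x (n2*T) block matrix

# Two related regression tasks on the same inputs (toy data)
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (30, 1))
Y = np.column_stack([np.sin(3 * X[:, 0]),                  # task 1
                     np.sin(3 * X[:, 0]) + 0.1 * X[:, 0]]) # related task 2
T = Y.shape[1]
B = np.array([[1.0, 0.9], [0.9, 1.0]])      # tasks assumed strongly related
K = multitask_kernel(X, X, B)
alpha = np.linalg.solve(K + 1e-3 * np.eye(30 * T), Y.ravel())
fit = (K @ alpha).reshape(30, T)
```

Setting B to the identity decouples the tasks into independent scalar problems; off-diagonal mass in B is what lets the tasks share information.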


Kernel Methods for Implicit Surface Modeling

Neural Information Processing Systems

We describe methods for computing an implicit model of a hypersurface that is given only by a finite sampling. The methods work by mapping the sample points into a reproducing kernel Hilbert space and then determining regions in terms of hyperplanes.
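A minimal instance of this construction: with uniform weights on every sample, the hyperplane in a Gaussian RKHS reduces to a kernel-density level set whose zero level models the sampled hypersurface. The paper's methods choose the hyperplane by other criteria (e.g. margin-based), so this is only a sketch with hypothetical parameters:

```python
import numpy as np

def implicit_surface(points, gamma=10.0):
    """Implicit model f(x) = mean_i k(x_i, x) - rho for a Gaussian kernel,
    i.e. a hyperplane with uniform weights in the RKHS; rho is chosen so
    every sample lies on or inside the zero level set."""
    def f(x):
        d2 = ((points - x) ** 2).sum(axis=1)
        return np.exp(-gamma * d2).mean()
    rho = min(f(p) for p in points)
    return lambda x: f(x) - rho           # >= 0 on/near the sampled surface

# Finite sampling of the unit circle in the plane
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])
F = implicit_surface(circle)
```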


Maximal Margin Labeling for Multi-Topic Text Categorization

Neural Information Processing Systems

In this paper, we address the problem of statistical learning for multi-topic text categorization (MTC), whose goal is to choose all relevant topics (a label) from a given set of topics. The proposed algorithm, Maximal Margin Labeling (MML), treats all possible labels as independent classes and learns a multi-class classifier on the induced multi-class categorization problem. To cope with the data sparseness caused by the huge number of possible labels, MML combines some prior knowledge about label prototypes and a maximal margin criterion in a novel way. Experiments with multi-topic Web pages show that MML outperforms existing learning algorithms including Support Vector Machines.
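The induced multi-class problem can be sketched by mapping each topic subset (a binary label vector) to a single class id. This is illustrative preprocessing only; MML's maximal-margin criterion and its prior on label prototypes, which handle the sparseness of rarely seen subsets, are not reproduced here:

```python
import numpy as np

def labelset_to_class(Y):
    """Treat each distinct topic subset as one class: returns a class id
    per document plus the subset-to-class mapping (the induced
    multi-class problem MML trains on)."""
    classes = {}
    ids = []
    for row in map(tuple, Y):
        if row not in classes:
            classes[row] = len(classes)
        ids.append(classes[row])
    return np.array(ids), classes

# Toy documents labeled over 3 topics; rows 0 and 2 share a topic subset
Y = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 0, 1],
              [1, 1, 0]])
ids, classes = labelset_to_class(Y)
```

With m topics there are 2^m possible subsets, which is why most subsets never appear in training data and plain multi-class learning on this encoding suffers from sparseness.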