The Bias-Variance Tradeoff and the Randomized GACV

Neural Information Processing Systems

We propose a new in-sample cross validation based method (randomized GACV) for choosing smoothing or bandwidth parameters that govern the bias-variance or fit-complexity tradeoff in 'soft' classification. Soft classification refers to a learning procedure which estimates the probability that an example with a given attribute vector is in class 1 vs class 0. The target for optimizing the tradeoff is the Kullback-Leibler distance between the estimated probability distribution and the 'true' probability distribution, representing knowledge of an infinite population.
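As an illustration of the target quantity, the following minimal sketch (not from the paper; the helper name kl_bernoulli and the arrays p_true, p_hat are ours) computes the average Kullback-Leibler distance between the 'true' and estimated class-1 probabilities over a sample, which is the quantity a smoothing-parameter sweep would aim to minimize.

```python
import numpy as np

def kl_bernoulli(p_true, p_hat, eps=1e-12):
    """Average Kullback-Leibler distance between the 'true' class-1
    probabilities p_true and the estimated probabilities p_hat."""
    p_true = np.clip(p_true, eps, 1 - eps)
    p_hat = np.clip(p_hat, eps, 1 - eps)
    return np.mean(p_true * np.log(p_true / p_hat)
                   + (1 - p_true) * np.log((1 - p_true) / (1 - p_hat)))

# Hypothetical usage: fit the classifier for several smoothing parameters
# and keep the one whose estimated probabilities minimize this distance
# (in practice the randomized GACV estimates it without the 'true' values).
```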


A Theory of Mean Field Approximation

Neural Information Processing Systems

I present a theory of mean field approximation based on information geometry. This theory includes in a consistent way the naive mean field approximation, as well as the TAP approach and the linear response theorem in statistical physics, giving clear information-theoretic interpretations to them. 1 INTRODUCTION Many problems of neural networks, such as learning and pattern recognition, can be cast into a framework of statistical estimation problems. How difficult it is to solve a particular problem depends on the statistical model one employs in solving it. For Boltzmann machines [1], for example, it is computationally very hard to evaluate expectations of state variables from the model parameters. Mean field approximation [2], which originated in statistical physics, has been frequently used in practical situations in order to circumvent this difficulty.
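For concreteness, here is a minimal sketch of the naive mean field approximation the abstract refers to, applied to a small Boltzmann machine with +/-1 units; the function name naive_mean_field and the damping parameter are ours, and the TAP correction (Onsager reaction term) is not included.

```python
import numpy as np

def naive_mean_field(W, b, n_iter=200, damping=0.5):
    """Fixed-point iteration m_i = tanh(b_i + sum_j W_ij m_j) for the
    magnetizations (expected +/-1 states) of a Boltzmann machine.
    W is a symmetric coupling matrix with zero diagonal, b the biases."""
    m = np.zeros_like(b, dtype=float)
    for _ in range(n_iter):
        m_new = np.tanh(b + W @ m)
        m = damping * m + (1 - damping) * m_new   # damped update for stability
    return m

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(5, 5))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
b = rng.normal(scale=0.5, size=5)
print(naive_mean_field(W, b))   # approximate expectations of the 5 units
```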


On the Optimality of Incremental Neural Network Algorithms

Neural Information Processing Systems

We study the approximation of functions by two-layer feedforward neural networks, focusing on incremental algorithms which greedily add units, estimating single unit parameters at each stage. As opposed to standard algorithms for fixed architectures, the optimization at each stage is performed over a small number of parameters, mitigating many of the difficult numerical problems inherent in high-dimensional nonlinear optimization. We establish upper bounds on the error incurred by the algorithm, when approximating functions from the Sobolev class, thereby extending previous results which only provided rates of convergence for functions in certain convex hulls of functional spaces. By comparing our results to recently derived lower bounds, we show that the greedy algorithms are nearly optimal. Combined with estimation error results for greedy algorithms, a strong case can be made for this type of approach.
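A rough sketch of the greedy incremental scheme, assuming tanh units and a coarse random search standing in for the single-unit estimation at each stage (this is not the paper's actual procedure; fit_incremental and its parameters are illustrative):

```python
import numpy as np

def fit_incremental(X, y, n_units=10, n_candidates=500, seed=0):
    """Greedy incremental approximation: at each stage fit one tanh unit
    (by coarse random search over its weights) to the current residual,
    then add it to the expansion."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pred = np.zeros(n)
    units = []
    for _ in range(n_units):
        r = y - pred                        # current residual
        best = None
        for _ in range(n_candidates):
            w = rng.normal(size=d)
            b = rng.normal()
            h = np.tanh(X @ w + b)
            a = (h @ r) / (h @ h + 1e-12)   # optimal output weight in closed form
            err = np.sum((r - a * h) ** 2)
            if best is None or err < best[0]:
                best = (err, w, b, a)
        _, w, b, a = best
        units.append((w, b, a))
        pred += a * np.tanh(X @ w + b)
    return units, pred
```

The point of the construction is that each stage optimizes only the d+2 parameters of the new unit, never the whole network at once.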



Computation of Smooth Optical Flow in a Feedback Connected Analog Network

Neural Information Processing Systems

In 1986, Tanner and Mead [1] implemented an interesting constraint satisfaction circuit for global motion sensing in aVLSI. We report here a new and improved aVLSI implementation that provides smooth optical flow as well as global motion in a two-dimensional visual field. The computation of optical flow is an ill-posed problem, which expresses itself as the aperture problem. However, the optical flow can be estimated by the use of regularization methods, in which additional constraints are introduced in terms of a global energy functional that must be minimized. We show how the algorithmic constraints of Horn and Schunck [2] on computing smooth optical flow can be mapped onto the physical constraints of an equivalent electronic network.
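The Horn and Schunck constraints amount to minimizing a brightness-constancy data term plus a smoothness penalty on the flow field. A minimal software sketch of the classical iteration (which the analog network solves in continuous time) might look as follows, with simplified derivative and neighbour-averaging stencils:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck smooth optical flow: minimizes the data term
    (Ix*u + Iy*v + It)^2 plus alpha^2 times the smoothness of (u, v)
    using the classical Jacobi-style iteration."""
    I1 = I1.astype(float); I2 = I2.astype(float)
    Ix = np.gradient(I1, axis=1)   # simple image derivatives
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    u = np.zeros_like(I1); v = np.zeros_like(I1)

    def local_avg(f):
        # 4-neighbour average as the discrete smoothness operator
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                       + np.roll(f, 1, 1) + np.roll(f, -1, 1))

    for _ in range(n_iter):
        u_bar, v_bar = local_avg(u), local_avg(v)
        common = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * common
        v = v_bar - Iy * common
    return u, v
```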


Bayesian Modeling of Facial Similarity

Neural Information Processing Systems

In previous work [6, 9, 10], we advanced a new technique for direct visual matching of images for the purposes of face recognition and image retrieval, using a probabilistic measure of similarity based primarily on a Bayesian (MAP) analysis of image differences, leading to a "dual" basis similar to eigenfaces [13]. The performance advantage of this probabilistic matching technique over standard Euclidean nearest-neighbor eigenface matching was recently demonstrated using results from DARPA's 1996 "FERET" face recognition competition, in which this probabilistic matching algorithm was found to be the top performer. We have further developed a simple method of replacing the costly computation of nonlinear (online) Bayesian similarity measures by the relatively inexpensive computation of linear (offline) subspace projections and simple (online) Euclidean norms, thus resulting in a significant computational speedup for implementation with very large image databases as typically encountered in real-world applications.
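The offline/online split can be sketched as follows, assuming only the dominant (intrapersonal) term of the Bayesian similarity: the eigen-decomposition and whitening of image differences are done once offline, so that the online measure reduces to a Euclidean norm between precomputed projections. Function names and the single-class simplification are ours, not the paper's.

```python
import numpy as np

def fit_whitening(deltas, k):
    """Offline: PCA of intrapersonal image differences; returns a whitening
    projection so that the Mahalanobis term of the Bayesian similarity
    becomes a plain Euclidean distance between projected images."""
    C = np.cov(deltas.T)
    vals, vecs = np.linalg.eigh(C)
    vals = np.maximum(vals, 1e-12)
    idx = np.argsort(vals)[::-1][:k]
    return vecs[:, idx] / np.sqrt(vals[idx])   # columns: whitened basis

def precompute(images, W):
    """Offline: project every gallery image once."""
    return images @ W

def similarity(q_proj, gallery_proj):
    """Online: only Euclidean norms between precomputed projections."""
    return -np.sum((gallery_proj - q_proj) ** 2, axis=1)
```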


Kernel PCA and De-Noising in Feature Spaces

Neural Information Processing Systems

Kernel PCA as a nonlinear feature extractor has proven powerful as a preprocessing step for classification algorithms. But it can also be considered as a natural generalization of linear principal component analysis. This gives rise to the question of how to use nonlinear features for data compression, reconstruction, and de-noising, applications common in linear PCA. This is a nontrivial task, as the results provided by kernel PCA live in some high-dimensional feature space and need not have pre-images in input space. This work presents ideas for finding approximate pre-images, focusing on Gaussian kernels, and shows experimental results using these pre-images in data reconstruction and de-noising on toy examples as well as on real-world data.
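For a Gaussian kernel, an approximate pre-image of the projection onto the leading kernel components can be found by a fixed-point iteration, sketched below (kernel-matrix centering is omitted for brevity and the function names are ours):

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

def denoise(X, x, sigma=1.0, n_components=4, n_iter=50):
    """Kernel PCA de-noising with a Gaussian kernel (uncentered for brevity):
    project x onto the leading kernel components, then recover an approximate
    pre-image z by iterating
    z <- sum_i gamma_i k(z, x_i) x_i / sum_i gamma_i k(z, x_i)."""
    K = gaussian_kernel(X, X, sigma)
    vals, vecs = np.linalg.eigh(K)
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))  # normalized
    kx = gaussian_kernel(X, x[None, :], sigma).ravel()
    betas = alphas.T @ kx          # projections onto the kernel components
    gamma = alphas @ betas         # expansion coefficients over training points
    z = x.copy()
    for _ in range(n_iter):
        w = gamma * gaussian_kernel(X, z[None, :], sigma).ravel()
        z = (w @ X) / (w.sum() + 1e-12)
    return z
```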


Active Noise Canceling Using Analog Neuro-Chip with On-Chip Learning Capability

Neural Information Processing Systems

A modular analogue neuro-chip set with on-chip learning capability is developed for active noise canceling. The analogue neuro-chip set incorporates the error backpropagation learning rule for practical applications, and allows pin-to-pin interconnections for multi-chip boards. The developed neuro-board demonstrated active noise canceling without any digital signal processor. Multi-path fading of acoustic channels, random noise, and nonlinear distortion of the loudspeaker are compensated by the adaptive learning circuits of the neuro-chips. Experimental results are reported for cancellation of car noise in real time.
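A software sketch of the adaptive structure such a chip set realizes in analog hardware: a small network trained online by error backpropagation predicts the noise reaching the primary microphone from a window of the reference input, and the residual serves as both the cleaned output and the training error. Layer sizes, learning rate, and function names are illustrative only.

```python
import numpy as np

def train_noise_canceller(primary, reference, n_taps=16, n_hidden=8,
                          lr=1e-3, seed=0):
    """One-hidden-layer network trained online by error backpropagation."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(n_hidden, n_taps)); b1 = np.zeros(n_hidden)
    w2 = rng.normal(scale=0.1, size=n_hidden); b2 = 0.0
    cleaned = np.zeros(len(primary))
    for t in range(n_taps, len(primary)):
        x = reference[t - n_taps:t]
        h = np.tanh(W1 @ x + b1)
        y = w2 @ h + b2                  # predicted noise at the primary mic
        e = primary[t] - y               # cleaned sample = training error
        cleaned[t] = e
        # backpropagation of the squared-error gradient
        g2 = -e
        gh = g2 * w2 * (1 - h**2)
        w2 -= lr * g2 * h; b2 -= lr * g2
        W1 -= lr * np.outer(gh, x); b1 -= lr * gh
    return cleaned
```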


Viewing Classifier Systems as Model Free Learning in POMDPs

Neural Information Processing Systems

Classifier systems are now viewed as disappointing because of problems such as the rule strength vs. rule set performance problem and the credit assignment problem. In order to solve these problems, we have developed a hybrid classifier system: GLS (Generalization Learning System). In designing GLS, we view CSs as model-free learning in POMDPs and take a hybrid approach to finding the best generalization, given the total number of rules. GLS uses the policy improvement procedure by Jaakkola et al. to obtain a locally optimal stochastic policy when a set of rule conditions is given. GLS uses a GA to search for the best set of rule conditions. 1 INTRODUCTION Classifier systems (CSs) (Holland 1986) have been among the most widely used approaches in reinforcement learning.
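The outer search over rule-condition sets can be sketched as a standard genetic algorithm over ternary (0/1/#) condition strings. The fitness below is a placeholder where GLS would instead evaluate the locally optimal stochastic policy obtained from the Jaakkola et al. procedure; all sizes and operators are illustrative, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)
N_RULES, N_BITS, POP, GENS = 4, 8, 20, 30
SYMBOLS = np.array([0, 1, 2])            # 0, 1, and 2 standing for '#' (don't care)

def random_individual():
    # one individual = a fixed-size set of ternary rule conditions
    return rng.choice(SYMBOLS, size=(N_RULES, N_BITS))

def fitness(ind):
    """Placeholder: in GLS this would be the return of the locally optimal
    stochastic policy computed for this condition set; here we simply
    reward wildcard generality so the loop runs end to end."""
    return np.mean(ind == 2)

def mutate(ind, p=0.05):
    mask = rng.random(ind.shape) < p
    ind = ind.copy()
    ind[mask] = rng.choice(SYMBOLS, size=mask.sum())
    return ind

def crossover(a, b):
    cut = rng.integers(1, N_RULES)       # swap whole rules at a cut point
    return np.vstack([a[:cut], b[cut:]])

pop = [random_individual() for _ in range(POP)]
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]
    elite = [pop[i] for i in order[:POP // 2]]          # truncation selection
    children = [mutate(crossover(elite[rng.integers(len(elite))],
                                 elite[rng.integers(len(elite))]))
                for _ in range(POP - len(elite))]
    pop = elite + children
best = max(pop, key=fitness)
```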


A Model for Associative Multiplication

Neural Information Processing Systems

Despite the fact that mental arithmetic is based on only a few hundred basic facts and some simple algorithms, humans have a difficult time mastering the subject, and even experienced individuals make mistakes. Associative multiplication, the process of doing multiplication by memory without the use of rules or algorithms, is especially problematic.