
A Neural Network Model of 3-D Lightness Perception

Neural Information Processing Systems

A neural network model of 3-D lightness perception is presented which builds upon the FACADE Theory Boundary Contour System/Feature Contour System of Grossberg and colleagues. Early ratio encoding by retinal ganglion neurons, as well as psychophysical results on constancy across different backgrounds (background constancy), are used to provide functional constraints on the theory and suggest a contrast negation hypothesis, which states that ratio measures between coplanar regions are given more weight in determining the lightness of the respective regions.


Optimal Asset Allocation using Adaptive Dynamic Programming

Neural Information Processing Systems

Ralph Neuneier, Siemens AG, Corporate Research and Development, Otto-Hahn-Ring 6, D-81730 München, Germany. Abstract: In recent years, the interest of investors has shifted to computerized asset allocation (portfolio management) to exploit the growing dynamics of the capital markets. In this paper, asset allocation is formalized as a Markovian Decision Problem which can be optimized by applying dynamic programming or reinforcement learning based algorithms. Using an artificial exchange rate, the asset allocation strategy optimized with reinforcement learning (Q-Learning) is shown to be equivalent to a policy computed by dynamic programming. The approach is then tested on the task of investing liquid capital in the German stock market. Here, neural networks are used as value function approximators.
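The Q-Learning/dynamic-programming equivalence mentioned in this abstract can be illustrated on a toy problem. The sketch below is not from the paper: the two-state "cash vs. stock" MDP, the reward values, and all hyperparameters are invented purely for illustration of tabular Q-learning.

```python
import random

def q_learning(transitions, rewards, n_states, n_actions,
               episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a small deterministic MDP.

    transitions[s][a] -> next state, rewards[s][a] -> immediate reward.
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    s = 0
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = rng.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda act: Q[s][act])
        s2 = transitions[s][a]
        # temporal-difference update toward r + gamma * max_a' Q(s', a')
        Q[s][a] += alpha * (rewards[s][a] + gamma * max(Q[s2]) - Q[s][a])
        s = s2
    return Q

# Hypothetical asset-allocation toy: state 0 = in cash, state 1 = in stock.
# Action a moves the portfolio into asset a; stock pays 1 per step, cash 0.
transitions = [[0, 1], [0, 1]]
rewards = [[0.0, 1.0], [0.0, 1.0]]
Q = q_learning(transitions, rewards, n_states=2, n_actions=2)
policy = [max(range(2), key=lambda act: Q[s][act]) for s in range(2)]
```

On this toy problem the greedy policy recovered from the learned Q-table ("always hold stock") matches the policy dynamic programming would compute, which is the kind of agreement the abstract verifies on its artificial exchange rate.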


Boosting Decision Trees

Neural Information Processing Systems

We introduce a constructive, incremental learning system for regression problems that models data by means of locally linear experts. In contrast to other approaches, the experts are trained independently and do not compete for data during learning. Only when a prediction for a query is required do the experts cooperate by blending their individual predictions. Each expert is trained by minimizing a penalized local cross validation error using second order methods. In this way, an expert is able to find a local distance metric by adjusting the size and shape of the receptive field in which its predictions are valid, and also to detect relevant input features by adjusting its bias on the importance of individual input dimensions. We derive asymptotic results for our method. In a variety of simulations the properties of the algorithm are demonstrated with respect to interference, learning speed, prediction accuracy, feature detection, and task oriented incremental learning.
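The blending step described above, where independently trained local experts cooperate only at query time, can be sketched in a few lines. This is an illustrative one-dimensional version, not the paper's system: the Gaussian receptive-field form and the pre-fitted expert parameters are assumptions made for the example.

```python
import math

def blend_predict(x, experts):
    """Blend the predictions of locally linear experts, weighted by each
    expert's Gaussian receptive-field activation at the query point x.

    Each expert is a tuple (center, width, slope, bias): its local linear
    model is slope * (x - center) + bias, valid near its center.
    """
    num, den = 0.0, 0.0
    for center, width, slope, bias in experts:
        act = math.exp(-0.5 * ((x - center) / width) ** 2)  # receptive field
        pred = slope * (x - center) + bias                  # local linear fit
        num += act * pred
        den += act
    return num / den

# Two hypothetical experts locally fitted to y = |x|, one on each side.
experts = [(-1.0, 1.0, -1.0, 1.0),   # left expert: slope -1 around x = -1
           (+1.0, 1.0, +1.0, 1.0)]   # right expert: slope +1 around x = +1
```

Because each expert only contributes where its receptive field is active, adding a new expert elsewhere does not interfere with existing fits, which is the interference property the abstract emphasizes.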


Some results on convergent unlearning algorithm

Neural Information Processing Systems

In recent years, unsupervised learning schemes have aroused strong interest among researchers, but for the time being little is known about the underlying learning mechanisms, and even fewer rigorous results, such as convergence theorems, have been obtained in this field. One promising concept along this line is so-called "unlearning" for Hopfield-type neural networks (Hopfield et al., 1983; van Hemmen & Klemmer, 1992; Wimbauer et al., 1994). Elaborating on these elegant ideas, a convergent unlearning algorithm has recently been proposed (Plakhov & Semenov, 1994) which executes without pattern presentation. It aims to correct the initial Hebbian connectivity in order to provide extensive storage of arbitrarily correlated data. The algorithm is stated as follows: at iteration step m, m = 0, 1, 2, ..., pick a random network state s(m)
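The generic unlearning idea referenced here (relax a random state under the network dynamics, then weaken the connections supporting the attractor that was found) can be sketched as follows. This is an illustrative variant, not the specific Plakhov and Semenov update rule; the relaxation schedule, learning rate, and network size are assumptions for the example.

```python
import random

def unlearn(J, n, iters=20, steps=60, eps=0.01, seed=0):
    """Generic Hopfield unlearning sketch: repeatedly start from a random
    state, relax it under asynchronous dynamics, and apply a small
    anti-Hebbian correction for the attractor state reached."""
    rng = random.Random(seed)
    for _ in range(iters):
        s = [rng.choice([-1, 1]) for _ in range(n)]
        for _ in range(steps):                        # asynchronous relaxation
            i = rng.randrange(n)
            h = sum(J[i][j] * s[j] for j in range(n) if j != i)
            s[i] = 1 if h >= 0 else -1
        for i in range(n):                            # anti-Hebbian correction
            for j in range(n):
                if i != j:
                    J[i][j] -= eps * s[i] * s[j]
    return J

def hebbian(patterns, n):
    """Initial Hebbian connectivity, zero diagonal."""
    J = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    J[i][j] += p[i] * p[j] / n
    return J
```

Note that the anti-Hebbian term s[i]s[j] is symmetric, so the update preserves the symmetry and zero diagonal of the Hebbian connectivity throughout.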


How Perception Guides Production in Birdsong Learning

Neural Information Processing Systems

The passeriformes, or songbirds, make up more than half of all bird species and are divided into two groups: the oscines, which learn their songs, and the sub-oscines, which do not. Oscines raised in isolation sing degraded species-typical songs similar to wild song. Deafened oscines sing completely degraded songs (Konishi, 1965), while deafened sub-oscines develop normal songs (Kroodsma and Konishi, 1991), indicating that auditory feedback is crucial in oscine song learning. Innate structures in the bird brain regulate song learning.


Learning the Structure of Similarity

Neural Information Processing Systems

The additive clustering (ADCLUS) model (Shepard & Arabie, 1979) treats the similarity of two stimuli as a weighted additive measure of their common features. Inspired by recent work in unsupervised learning with multiple cause models, we propose a new, statistically well-motivated algorithm for discovering the structure of natural stimulus classes using the ADCLUS model, which promises substantial gains in conceptual simplicity, practical efficiency, and solution quality over earlier efforts.
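The ADCLUS similarity measure itself is simple to state: the similarity of two stimuli is the summed weight of the discrete features they share. The sketch below illustrates only this measure, with an invented binary feature matrix and weights; it is not the paper's discovery algorithm.

```python
def adclus_similarity(weights, features, i, j):
    """ADCLUS similarity of stimuli i and j: the sum of the weights of the
    features possessed by both stimuli.

    features[k] is a binary membership list over stimuli for feature k,
    and weights[k] is that feature's (non-negative) weight.
    """
    return sum(w for w, f in zip(weights, features) if f[i] and f[j])

# Hypothetical example: 3 stimuli, 2 overlapping feature clusters.
weights = [2.0, 1.0]
features = [[1, 1, 0],   # feature 0 shared by stimuli 0 and 1
            [0, 1, 1]]   # feature 1 shared by stimuli 1 and 2
```

Fitting the model then means finding the feature memberships and weights that best reproduce an observed similarity matrix, which is the search problem the proposed algorithm addresses.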


A Practical Monte Carlo Implementation of Bayesian Learning

Neural Information Processing Systems

A practical method for Bayesian training of feed-forward neural networks using sophisticated Monte Carlo methods is presented and evaluated. In reasonably small amounts of computer time this approach outperforms other state-of-the-art methods on five data-limited tasks from real world domains.

1 INTRODUCTION

Bayesian learning uses a prior on model parameters, combines this with information from a training set, and then integrates over the resulting posterior to make predictions. With this approach, we can use large networks without fear of overfitting, allowing us to capture more structure in the data, thus improving prediction accuracy and eliminating the tedious search (often performed using cross validation) for the model complexity that optimises the bias/variance tradeoff. In this approach the size of the model is limited only by computational considerations. The application of Bayesian learning to neural networks has been pioneered by MacKay (1992), who uses a Gaussian approximation to the posterior weight distribution.
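The core prediction step of Monte Carlo Bayesian learning, integrating over the posterior, reduces in practice to averaging network outputs over posterior weight samples. The sketch below shows only that averaging step; the trivial linear "network" and the hand-picked samples are stand-ins for a real sampler's output, not anything from the paper.

```python
def predict_bayes(x, weight_samples, net):
    """Monte Carlo approximation of the Bayesian predictive mean:
    E[y | x, data] is approximated by averaging the network output over
    weight vectors sampled from the posterior p(w | data)."""
    outputs = [net(x, w) for w in weight_samples]
    return sum(outputs) / len(outputs)

# Hypothetical stand-ins: a one-parameter linear net and three posterior
# samples (in practice these would come from an MCMC sampler).
net = lambda x, w: w * x
posterior_samples = [0.9, 1.0, 1.1]
```

Because the prediction is an average over many plausible networks rather than the output of one best-fit network, a large model does not overfit in the usual way, which is the point the introduction makes.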


Parallel Optimization of Motion Controllers via Policy Iteration

Neural Information Processing Systems

This paper describes a policy iteration algorithm for optimizing the performance of a harmonic function-based controller with respect to a user-defined index. Value functions are represented as potential distributions over the problem domain, while control policies are represented as gradient fields over the same domain. All intermediate policies are intrinsically safe, i.e., collisions are not promoted during the adaptation process. The algorithm has an efficient implementation on parallel SIMD architectures. One potential application, travel distance minimization, illustrates its usefulness.
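The basic harmonic-function controller underlying this work can be sketched on a grid: hold obstacles at high potential and the goal at zero, relax the interior toward the mean of its neighbours (Laplace's equation), and read the policy off as steepest descent on the potential. This is a generic illustration of that setup, not the paper's policy iteration algorithm; grid size, boundary values, and iteration count are assumptions.

```python
def harmonic_potential(grid, goal, iters=500):
    """Relax a harmonic potential on a grid by Jacobi iteration.

    grid[y][x] == 1 marks an obstacle (held at potential 1, as is the
    outer border); the goal cell is held at 0; every free interior cell is
    repeatedly replaced by the mean of its four neighbours.
    """
    h, w = len(grid), len(grid[0])
    U = [[1.0] * w for _ in range(h)]
    U[goal[0]][goal[1]] = 0.0
    for _ in range(iters):
        V = [row[:] for row in U]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if grid[y][x] == 0 and (y, x) != goal:
                    V[y][x] = (U[y-1][x] + U[y+1][x]
                               + U[y][x-1] + U[y][x+1]) / 4.0
        U = V
    return U

def greedy_policy(U, y, x):
    """The control policy is the gradient field: step to the
    lowest-potential neighbour."""
    nbrs = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    return min(nbrs, key=lambda p: U[p[0]][p[1]])
```

A harmonic potential has no local minima in the free space, so following the gradient field from any free cell reaches the goal without entering high-potential obstacle regions, which is why the intermediate policies stay collision-safe.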


EM Optimization of Latent-Variable Density Models

Neural Information Processing Systems

There is currently considerable interest in developing general nonlinear density models based on latent, or hidden, variables. Such models have the ability to discover the presence of a relatively small number of underlying 'causes' which, acting in combination, give rise to the apparent complexity of the observed data set. Unfortunately, to train such models generally requires large computational effort. In this paper we introduce a novel latent variable algorithm which retains the general nonlinear capabilities of previous models but which uses a training procedure based on the EM algorithm. We demonstrate the performance of the model on a toy problem and on data from flow diagnostics for a multiphase oil pipeline.
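The EM training procedure mentioned here alternates an E-step (infer the posterior over latent causes given current parameters) with an M-step (re-estimate parameters from those posteriors). The minimal sketch below applies this scheme to a two-component 1-D Gaussian mixture with fixed unit variances; it illustrates EM in general, not the paper's nonlinear latent variable model.

```python
import math

def em_gmm_1d(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture with unit variances.

    E-step: responsibilities r[n][k] = p(component k | x_n).
    M-step: means and mixing proportions re-estimated from responsibilities.
    """
    mu = [min(data), max(data)]   # crude initialization at the data extremes
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        r = []
        for x in data:
            p = [pi[k] * math.exp(-0.5 * (x - mu[k]) ** 2) for k in (0, 1)]
            z = sum(p)
            r.append([pk / z for pk in p])
        # M-step: responsibility-weighted means and mixing proportions
        for k in (0, 1):
            nk = sum(rn[k] for rn in r)
            mu[k] = sum(rn[k] * x for rn, x in zip(r, data)) / nk
            pi[k] = nk / len(data)
    return mu, pi
```

Each EM iteration is guaranteed not to decrease the data likelihood, which is what makes it an attractive, derivative-free alternative to gradient-based training for latent variable density models.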


Modeling Interactions of the Rat's Place and Head Direction Systems

Neural Information Processing Systems

We have developed a computational theory of rodent navigation that includes analogs of the place cell system, the head direction system, and path integration. In this paper we present simulation results showing how interactions between the place and head direction systems can account for recent observations about hippocampal place cell responses to doubling and/or rotation of cue cards in a cylindrical arena (Sharp et al.,