Information Technology
A Neural Network Model of 3-D Lightness Perception
Pessoa, Luiz, Ross, William D.
A neural network model of 3-D lightness perception is presented which builds upon the FACADE Theory Boundary Contour System/Feature Contour System of Grossberg and colleagues. Early ratio encoding by retinal ganglion neurons, as well as psychophysical results on constancy across different backgrounds (background constancy), are used to provide functional constraints to the theory and suggest a contrast negation hypothesis, which states that ratio measures between coplanar regions are given more weight in the determination of the lightness of the respective regions.
Optimal Asset Allocation using Adaptive Dynamic Programming
Ralph Neuneier, Siemens AG, Corporate Research and Development, Otto-Hahn-Ring 6, D-81730 München, Germany
In recent years, the interest of investors has shifted to computerized asset allocation (portfolio management) to exploit the growing dynamics of the capital markets. In this paper, asset allocation is formalized as a Markovian Decision Problem which can be optimized by applying dynamic programming or reinforcement learning based algorithms. Using an artificial exchange rate, the asset allocation strategy optimized with reinforcement learning (Q-Learning) is shown to be equivalent to a policy computed by dynamic programming. The approach is then tested on the task of investing liquid capital in the German stock market. Here, neural networks are used as value function approximators.
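The claimed equivalence between Q-Learning and dynamic programming can be illustrated on a toy problem. This is a minimal sketch: the two-state deterministic MDP below is an illustrative assumption, not the paper's artificial exchange-rate environment or its neural value-function approximators.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 2, 2, 0.9
# Toy deterministic model (assumed for illustration):
# P[s, a] -> next state, R[s, a] -> immediate reward
P = np.array([[0, 1], [1, 0]])
R = np.array([[1.0, 0.0], [0.0, 2.0]])

# Tabular Q-Learning with epsilon-greedy exploration
Q = np.zeros((n_states, n_actions))
alpha, epsilon = 0.1, 0.2
s = 0
for step in range(20000):
    a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
    s_next, r = P[s, a], R[s, a]
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

# Dynamic programming (value iteration) on the same model
V = np.zeros(n_states)
for _ in range(1000):
    V = (R + gamma * V[P]).max(axis=1)

greedy_policy = Q.argmax(axis=1)       # policy learned by Q-Learning
dp_policy = (R + gamma * V[P]).argmax(axis=1)  # policy from DP
```

On this small deterministic model, the greedy policy extracted from the learned Q-table coincides with the policy computed by value iteration, mirroring the equivalence the abstract reports.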
Cholinergic suppression of transmission may allow combined associative memory function and self-organization in the neocortex
Hasselmo, Michael E., Cekic, Milos
Selective suppression of transmission at feedback synapses during learning is proposed as a mechanism for combining associative feedback with self-organization of feedforward synapses. Experimental data demonstrates cholinergic suppression of synaptic transmission in layer I (feedback synapses), and a lack of suppression in layer IV (feedforward synapses). A network with this feature uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and desired response are simultaneously presented as input. Feedforward connections form self-organized representations of input, while suppressed feedback connections learn the transpose of feedforward connectivity. During recall, suppression is removed, sensory input activates the self-organized representation, and activity generates the learned response.
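The learning/recall scheme above can be sketched in simplified form: a competitive feedforward layer self-organizes while feedback weights, silenced during learning, are trained toward the transpose of the feedforward connectivity. The network sizes, rates, and the autoassociative simplification (recalling the input class rather than a separate desired response) are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden = 8, 4
W_ff = rng.random((n_hidden, n_in))   # feedforward weights (layer IV analogue)
W_fb = np.zeros((n_in, n_hidden))     # feedback weights (layer I analogue)
fb_gain = 0.0  # cholinergic suppression: feedback silenced during learning

patterns = rng.random((20, n_in))
for _ in range(200):
    for x in patterns:
        winner = (W_ff @ x).argmax()  # competitive activation, feedback ignored
        # feedforward connections self-organize toward the winning input
        W_ff[winner] += 0.05 * (x - W_ff[winner])
        # suppressed feedback connections learn the feedforward transpose
        W_fb[:, winner] += 0.05 * (x - W_fb[:, winner])

fb_gain = 1.0  # recall: suppression removed, feedback gain restored
x = patterns[0]
winner = (W_ff @ x).argmax()
recalled = fb_gain * W_fb[:, winner]  # feedback reconstructs the learned class
```

After training, the feedback column for an active unit closely matches the corresponding feedforward row, i.e. W_fb approximates the transpose of W_ff for trained units.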
Boosting Decision Trees
Drucker, Harris, Cortes, Corinna
We introduce a constructive, incremental learning system for regression problems that models data by means of locally linear experts. In contrast to other approaches, the experts are trained independently and do not compete for data during learning. Only when a prediction for a query is required do the experts cooperate by blending their individual predictions. Each expert is trained by minimizing a penalized local cross validation error using second order methods. In this way, an expert is able to find a local distance metric by adjusting the size and shape of the receptive field in which its predictions are valid, and also to detect relevant input features by adjusting its bias on the importance of individual input dimensions. We derive asymptotic results for our method. In a variety of simulations the properties of the algorithm are demonstrated with respect to interference, learning speed, prediction accuracy, feature detection, and task oriented incremental learning.
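The cooperation-at-query-time idea above can be sketched as follows: two local linear experts are fitted independently by weighted least squares on their own receptive-field views of the data, and only at prediction time are their outputs blended by their receptive-field activations. The centers, widths, 1-D target function, and plain weighted-least-squares fit are illustrative assumptions, not the paper's penalized local cross-validation procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
centers = np.array([-1.0, 1.0])  # assumed expert placements
width = 1.0                      # assumed receptive-field width

def activation(x, c):
    """Gaussian receptive-field activation of an expert centered at c."""
    return np.exp(-0.5 * ((x - c) / width) ** 2)

# Each expert fits its own local linear model independently;
# experts never compete for data points.
X = rng.uniform(-2, 2, 200)
Y = np.abs(X)  # illustrative target: |x|, piecewise linear
experts = []
for c in centers:
    w = activation(X, c)
    A = np.stack([X - c, np.ones_like(X)], axis=1)   # local linear basis
    beta = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * Y))
    experts.append((c, beta))

def predict(x):
    """Blend the experts' predictions by their receptive-field activations."""
    acts = np.array([activation(x, c) for c, _ in experts])
    preds = np.array([b[0] * (x - c) + b[1] for c, b in experts])
    return (acts * preds).sum() / acts.sum()
```

Near each center the blend is dominated by the local expert, so the piecewise-linear target is approximated well even though no expert ever saw a globally weighted loss.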
Some results on convergent unlearning algorithm
Semenov, Serguei A., Shuvalova, Irina B.
In past years, unsupervised learning schemes have aroused strong interest among researchers, but for the time being little is known about the underlying learning mechanisms, and still fewer rigorous results, such as convergence theorems, have been obtained in this field. One promising concept along this line is the so-called "unlearning" for Hopfield-type neural networks (Hopfield et al., 1983; van Hemmen & Klemmer, 1992; Wimbauer et al., 1994). Elaborating on these elegant ideas, a convergent unlearning algorithm has recently been proposed (Plakhov & Semenov, 1994), which executes without pattern presentation. It aims to correct the initial Hebbian connectivity in order to provide extensive storage of arbitrarily correlated data. The algorithm is stated as follows. Pick at iteration step m, m = 0, 1, 2, ..., a random network state s(m)
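The unlearning scheme sketched above (random state, relaxation, weight correction) can be illustrated as follows. This is a generic Hebbian-unlearning sketch under assumed parameters: the step size, relaxation schedule, and stopping rule are illustrative, and the convergent variant of Plakhov & Semenov differs in its details.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 32
patterns = rng.choice([-1.0, 1.0], size=(5, N))
J = patterns.T @ patterns / N   # initial Hebbian connectivity
np.fill_diagonal(J, 0.0)

def relax(s, J, steps=100):
    """Asynchronous +/-1 dynamics, relaxing a state toward an attractor."""
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1.0 if J[i] @ s >= 0 else -1.0
    return s

eps = 0.01
for m in range(200):  # iteration steps m = 0, 1, 2, ...
    # pick a random network state s(m) and relax it under the dynamics
    s = relax(rng.choice([-1.0, 1.0], size=N), J)
    # unlearn: weaken the outer product of the state that was reached
    J -= (eps / N) * np.outer(s, s)
    np.fill_diagonal(J, 0.0)
```

Note that no stored pattern is ever presented: the correction is driven entirely by states the network itself relaxes into, which is the sense in which the algorithm "executes without pattern presentation."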
How Perception Guides Production in Birdsong Learning
The passeriformes, or songbirds, make up more than half of all bird species and are divided into two groups: the oscines, which learn their songs, and the sub-oscines, which do not. Oscines raised in isolation sing degraded species-typical songs similar to wild song. Deafened oscines sing completely degraded songs (Konishi, 1965), while deafened sub-oscines develop normal songs (Kroodsma and Konishi, 1991), indicating that auditory feedback is crucial in oscine song learning. Innate structures in the bird brain regulate song learning.
Learning the Structure of Similarity
The additive clustering (ADCLUS) model (Shepard & Arabie, 1979) treats the similarity of two stimuli as a weighted additive measure of their common features. Inspired by recent work in unsupervised learning with multiple-cause models, we propose a new, statistically well-motivated algorithm for discovering the structure of natural stimulus classes using the ADCLUS model, which promises substantial gains in conceptual simplicity, practical efficiency, and solution quality over earlier efforts.
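The ADCLUS similarity measure itself is simple to state: s_ij = sum_k w_k f_ik f_jk, a weighted count of the features common to stimuli i and j. The small binary feature matrix and weights below are illustrative assumptions, chosen only to show the computation.

```python
import numpy as np

# F[i, k] = 1 if stimulus i possesses feature k (assumed toy data)
F = np.array([[1, 1, 0],   # stimulus 0 has features 0 and 1
              [1, 0, 1],   # stimulus 1 has features 0 and 2
              [0, 1, 1]])  # stimulus 2 has features 1 and 2
w = np.array([0.5, 0.3, 0.2])  # assumed nonnegative feature weights

# Predicted similarity: s_ij = sum_k w_k * F[i, k] * F[j, k]
S = F @ np.diag(w) @ F.T
```

For instance, stimuli 0 and 1 share only feature 0, so their predicted similarity is its weight, 0.5; discovering F and w from an observed similarity matrix is the learning problem the abstract addresses.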