Development of Orientation and Ocular Dominance Columns in Infant Macaques
Obermayer, Klaus, Kiorpes, Lynne, Blasdel, Gary G.
Maps of orientation preference and ocular dominance were recorded optically from the cortices of five infant macaque monkeys, ranging in age from 3.5 to 14 weeks. In agreement with previous observations, we found that basic features of orientation and ocular dominance maps, as well as correlations between them, are present and robust by 3.5 weeks of age. We did observe changes in the strength of ocular dominance signals, as well as in the spacing of ocular dominance bands, both of which increased steadily between 3.5 and 14 weeks of age. The latter finding suggests that the adult spacing of ocular dominance bands depends on cortical growth in neonatal animals. Since we found no corresponding increase in the spacing of orientation preferences, however, it is possible that the orientation preferences of some cells change as the cortical surface expands. Since correlations between the patterns of orientation selectivity and ocular dominance are present at an age when the visual system is still immature, it seems likely that their development is an innate process that does not require extensive visual experience.
Locally Adaptive Nearest Neighbor Algorithms
Wettschereck, Dietrich, Dietterich, Thomas G.
Four versions of a k-nearest neighbor algorithm with locally adaptive k are introduced and compared to the basic k-nearest neighbor algorithm (kNN). Locally adaptive kNN algorithms choose the value of k used to classify a query by consulting the results of cross-validation computations in the local neighborhood of the query. Local kNN methods are shown to perform similarly to kNN in experiments with twelve commonly used data sets. Encouraging results on three constructed tasks show that local methods can significantly outperform kNN in specific applications. Local methods can be recommended for online learning and for applications in which different regions of the input space are covered by patterns solving different sub-tasks.
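The mechanism described above lends itself to a compact sketch. The following is a minimal, illustrative implementation of one plausible variant; the neighborhood size m, the candidate values of k, and the leave-one-out validation scheme are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def local_knn_predict(X_train, y_train, x_query, k_candidates=(1, 3, 5, 7, 9), m=30):
    """Classify x_query with a k chosen by leave-one-out validation
    restricted to the m training points nearest the query."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    local = np.argsort(d)[:m]                     # local neighborhood of the query
    Xl, yl = X_train[local], y_train[local]

    def knn_vote(X, y, x, k):
        idx = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
        vals, counts = np.unique(y[idx], return_counts=True)
        return vals[np.argmax(counts)]

    # leave-one-out error of each candidate k inside the local neighborhood
    best_k, best_err = k_candidates[0], np.inf
    for k in k_candidates:
        errs = sum(knn_vote(np.delete(Xl, i, 0), np.delete(yl, i), Xl[i], k) != yl[i]
                   for i in range(len(Xl)))
        if errs < best_err:
            best_k, best_err = k, errs
    return knn_vote(X_train, y_train, x_query, best_k)
```

Because the cross-validation is repeated per query, different regions of the input space can settle on different values of k, which is the property the abstract credits for the gains on the constructed tasks.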
Bayesian Backpropagation Over I-O Functions Rather Than Weights
The conventional Bayesian justification of backprop is that it finds the MAP weight vector. As this paper shows, to find the MAP i-o function instead, one must add a correction term to backprop. That term biases one towards i-o functions with small description lengths, and in particular favors (some kinds of) feature selection, pruning, and weight sharing.
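The paper's actual correction term is not reproduced here. Purely as a hedged illustration of the general shape of the idea, the sketch below augments an ordinary gradient step with the gradient of a weight-elimination-style penalty, a known stand-in that biases training toward prunable (near-zero) weights; the penalty form, the toy linear model, and all constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.0, 0.5])    # toy regression target

w = rng.normal(size=5)
lam, w0, lr = 0.1, 1.0, 0.01
for _ in range(2000):
    err = X @ w - y
    grad_data = X.T @ err / len(X)               # ordinary backprop gradient (MSE)
    # gradient of the weight-elimination penalty sum_i w_i^2 / (w0^2 + w_i^2),
    # a stand-in for a correction that favors short-description weight vectors
    grad_pen = 2 * w * w0**2 / (w0**2 + w**2)**2
    w -= lr * (grad_data + lam * grad_pen)
```

The point of the abstract is that such a bias is not an optional regularizer but the term required to move the MAP estimate from weight space to i-o function space.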
Computational Elements of the Adaptive Controller of the Human Arm
Shadmehr, Reza, Mussa-Ivaldi, Ferdinando A.
We consider the problem of how the CNS learns to control the dynamics of a mechanical system. Using a paradigm in which a subject's hand interacts with a virtual mechanical environment, we show that control is learned by composing a model of the imposed dynamics. Some properties of the computational elements with which the CNS composes this model are inferred from the subject's ability to generalize beyond the training data.
Learning Curves: Asymptotic Values and Rate of Convergence
Cortes, Corinna, Jackel, L. D., Solla, Sara A., Vapnik, Vladimir, Denker, John S.
Training classifiers on large databases is computationally demanding. It is desirable to develop efficient procedures for reliably predicting a classifier's suitability for a given task, so that resources can be assigned to the most promising candidates or freed to explore new ones. We propose such a predictive method, one that is both practical and principled: practical because it avoids the costly procedure of training poor classifiers on the whole training set, and principled because of its theoretical foundation. The effectiveness of the proposed procedure is demonstrated for both single- and multi-layer networks.
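The abstract does not spell out the procedure. A common realization of this idea, and an assumption here, is to measure test error on nested subsets of the data and fit a power law error(n) ≈ a + b·n^(-α), whose fitted a predicts the asymptotic error and α the rate of convergence; the subset sizes and error values below are invented toy numbers:

```python
import numpy as np
from scipy.optimize import curve_fit

# measured test errors of a classifier trained on nested subsets (toy numbers)
n = np.array([500, 1000, 2000, 4000, 8000])
err = np.array([0.21, 0.17, 0.145, 0.13, 0.121])

def power_law(n, a, b, alpha):
    return a + b * n ** (-alpha)

(a, b, alpha), _ = curve_fit(power_law, n, err, p0=(0.1, 10.0, 0.5), maxfev=10000)
print(f"predicted asymptotic error: {a:.3f}, convergence rate alpha: {alpha:.2f}")
```

Under this reading, a candidate classifier can be ranked by its extrapolated asymptote after training only on small subsets, which is where the computational savings come from.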
Classification of Multi-Spectral Pixels by the Binary Diamond Neural Network
Classification is widely used in the animal kingdom. Identifying an item as food is classification. Assigning words to objects, actions, feelings, and situations is classification. The purpose of this work is to introduce a new neural network, the Binary Diamond, which can be used as a general purpose classification tool. The design and operational mode of the Binary Diamond are influenced by observations of the underlying mechanisms that take place in human classification processes.
Dynamic Modulation of Neurons and Networks
Biological neurons have a variety of intrinsic properties because of the large number of voltage-dependent currents that control their activity. Neuromodulatory substances modify both the balance of conductances that determine intrinsic properties and the strength of synapses. These mechanisms alter circuit dynamics, and suggest that functional circuits exist only in the modulatory environment in which they operate.
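To make the point concrete, here is a minimal sketch, not taken from the paper, in which a single neuromodulatory factor scales one maximal conductance of a simple integrate-and-fire neuron and thereby changes how the same model fires; all parameters are arbitrary illustrative values in dimensionless units:

```python
import numpy as np

def simulate(mod, T=1.0, dt=1e-4):
    """Integrate-and-fire neuron with a spike-triggered adaptation
    conductance whose maximal strength scales with the modulatory factor."""
    C, g_leak, E_leak = 1.0, 5.0, -65.0        # membrane capacitance and leak
    E_K, tau_adapt = -80.0, 0.3                # adaptation reversal and decay
    g_inc = 5.0 * mod                          # neuromodulation scales this conductance
    V, g_adapt, I_ext, spikes = -65.0, 0.0, 100.0, 0
    for _ in range(int(T / dt)):
        dV = (-g_leak * (V - E_leak) - g_adapt * (V - E_K) + I_ext) / C
        V += dt * dV
        g_adapt -= dt * g_adapt / tau_adapt
        if V >= -50.0:                         # threshold: spike and reset
            V, g_adapt, spikes = -65.0, g_adapt + g_inc, spikes + 1
    return spikes

# the same model element fires differently in different modulatory states
for mod in (0.0, 1.0, 2.0):
    print(f"modulation {mod}: {simulate(mod)} spikes")
```

Changing `mod` changes the balance of conductances and hence the spike count, which is the sense in which the functional circuit exists only within its modulatory environment.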
Clustering with a Domain-Specific Distance Measure
Gold, Steven, Mjolsness, Eric, Rangarajan, Anand
The distance measure and learning problem are formally described as nested objective functions. We derive an efficient algorithm by using optimization techniques that allow us to divide the objective function into parts which may be minimized in distinct phases. In 120 experiments, the algorithm accurately recreated 10 prototypes from a randomly generated sample database of 100 images consisting of 20 points each. Finally, by incorporating permutation invariance in our distance measure, we obtain a technique that may be applicable to the clustering of graphs. Our goal is to develop measures that enable the learning of objects with shape or structure.
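The abstract does not give the objective functions themselves. As an illustrative stand-in only, the sketch below pairs a permutation-invariant point-set distance (optimal one-to-one matching via scipy's linear_sum_assignment) with a k-means-style alternation between two phases, assignment and prototype re-estimation; it assumes each image is an array of the same number of 2-D points:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_distance(A, B):
    """Permutation-invariant distance between equal-size point sets:
    cost of the best one-to-one matching, plus the matching itself."""
    cost = cdist(A, B)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum(), cols

def cluster(images, n_proto, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    protos = [images[i].copy()
              for i in rng.choice(len(images), n_proto, replace=False)]
    labels = [0] * len(images)
    for _ in range(n_iter):
        # phase 1: assign each image to its nearest prototype
        labels = [min(range(n_proto),
                      key=lambda j: match_distance(img, protos[j])[0])
                  for img in images]
        # phase 2: re-estimate each prototype as the mean of its members,
        # after permuting each member's points into the prototype's order
        for j in range(n_proto):
            members = [img for img, l in zip(images, labels) if l == j]
            if members:
                aligned = [img[match_distance(protos[j], img)[1]] for img in members]
                protos[j] = np.mean(aligned, axis=0)
    return protos, labels
```

Because the distance is invariant to the order in which an image's points are listed, the same machinery extends naturally toward structured objects such as graphs, which is the extension the abstract anticipates.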
Bounds on the complexity of recurrent neural network implementations of finite state machines
Although there are many ways to measure efficiency, we shall be concerned with node complexity, which, as its name implies, is a count of the required number of nodes. Node complexity is a useful measure of efficiency since the amount of resources required to implement or even simulate a recurrent neural network is typically related to the number of nodes. Node complexity can also be related to the efficiency of learning algorithms for these networks, and perhaps to their generalization ability as well. We shall focus on the node complexity of recurrent neural network implementations of finite state machines (FSMs) when the nodes of the network are restricted to threshold logic units.
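As a concrete instance of the object being counted (an illustration, not an example from the paper): the two-state parity FSM can be implemented with three threshold logic units, with the single state bit fed back into the network at every time step.

```python
def step(z):
    """Threshold logic unit: outputs 1 iff its net input is positive."""
    return int(z > 0)

def parity_fsm(bits):
    """Two-state FSM (even/odd parity) built from three threshold units.
    The state bit s is fed back to the network on every time step."""
    s = 0                                   # start in the 'even' state
    for x in bits:
        h_or  = step(s + x - 0.5)           # s OR x
        h_and = step(s + x - 1.5)           # s AND x
        s     = step(h_or - h_and - 0.5)    # (s OR x) AND NOT (s AND x) = s XOR x
    return s

assert parity_fsm([1, 0, 1, 1]) == 1        # odd number of 1s
assert parity_fsm([1, 1, 0]) == 0           # even number of 1s
```

Here a 2-state FSM costs 3 threshold nodes; node complexity asks how such costs scale for general FSMs.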