Goto

Collaborating Authors



Discriminability-Based Transfer between Neural Networks

Neural Information Processing Systems

Neural networks are usually trained from scratch, relying only on the training data for guidance. However, as more and more networks are trained for various tasks, it becomes reasonable to seek out methods that transfer knowledge from previously trained networks to new tasks.


Global Regularization of Inverse Kinematics for Redundant Manipulators

Neural Information Processing Systems

When m < n, we say that the manipulator has redundant degrees-of-freedom (dof). The inverse kinematics problem is the following: given a desired workspace location x, find joint variables θ such that f(θ) = x. Even when the forward kinematics is known, the inverse kinematics for a manipulator is not generically solvable in closed form (Craig, 1986).
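To make the setting concrete, here is a minimal iterative sketch (not the paper's global regularization method): Jacobian-pseudoinverse inverse kinematics for a hypothetical planar three-link arm, where n = 3 joint angles map to an m = 2 workspace position. The link lengths, step size, and function names are all illustrative assumptions.

```python
import numpy as np

# Hypothetical planar 3-link arm: n = 3 joint angles map to an m = 2
# end-effector position, so one degree of freedom is redundant.
LINK_LENGTHS = np.array([1.0, 0.8, 0.5])

def forward_kinematics(theta):
    """f(theta): end-effector position from cumulative joint angles."""
    angles = np.cumsum(theta)
    return np.array([np.sum(LINK_LENGTHS * np.cos(angles)),
                     np.sum(LINK_LENGTHS * np.sin(angles))])

def jacobian(theta, eps=1e-6):
    """Finite-difference Jacobian of the forward kinematics."""
    J = np.zeros((2, len(theta)))
    for i in range(len(theta)):
        d = np.zeros_like(theta)
        d[i] = eps
        J[:, i] = (forward_kinematics(theta + d)
                   - forward_kinematics(theta - d)) / (2 * eps)
    return J

def solve_ik(x_target, theta, iters=200, step=0.5):
    """Reduce the workspace error with Jacobian-pseudoinverse updates."""
    theta = theta.copy()
    for _ in range(iters):
        error = x_target - forward_kinematics(theta)
        if np.linalg.norm(error) < 1e-8:
            break
        # pinv picks the minimum-norm joint update among infinitely many
        theta += step * np.linalg.pinv(jacobian(theta)) @ error
    return theta

theta = solve_ik(np.array([1.0, 1.2]), np.zeros(3))
print(forward_kinematics(theta))  # ~ [1.0, 1.2]
```

Because m < n, each pseudoinverse step only resolves the redundancy locally (minimum-norm update); regularizing that choice consistently over the whole workspace is the problem the paper addresses.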


Diffusion Approximations for the Constant Learning Rate Backpropagation Algorithm and Resistance to Local Minima

Neural Information Processing Systems

Backpropagation with a constant learning rate η ∈ (0, ∞) remains, in spite of many real (and imagined) deficiencies, the most widely used network training algorithm, and a vast body of literature documents its general applicability and robustness. In this paper we will draw on the highly developed literature of stochastic approximation theory to demonstrate several asymptotic properties of simple backpropagation.
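As a toy illustration of the constant-learning-rate setting (not the paper's diffusion analysis), the sketch below runs stochastic gradient steps with noisy gradients on a one-dimensional double-well loss; the loss, noise scale, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy double-well loss f(t) = (t^2 - 1)^2 - 0.3 t: a shallow local minimum
# near t = -1 and the global minimum near t = +1.
def noisy_grad(t, noise_scale=1.5):
    return 4 * t * (t**2 - 1) - 0.3 + noise_scale * rng.standard_normal()

eta = 0.05        # constant learning rate, never annealed
t = -1.0          # start in the basin of the shallow local minimum
for _ in range(20_000):
    t -= eta * noisy_grad(t)

# With a constant learning rate the gradient noise never dies out, so the
# iterate can diffuse over the barrier toward the global minimum; with a
# decreasing learning rate it would tend to freeze where it started.
print(t)
```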


Extended Regularization Methods for Nonconvergent Model Selection

Neural Information Processing Systems

Many techniques for model selection in the field of neural networks correspond to well-established statistical methods. The method of 'stopped training', on the other hand, in which an oversized network is trained until the error on a further validation set of examples deteriorates, then training is stopped, is a true innovation, since model selection doesn't require convergence of the training process. In this paper we show that this performance can be significantly enhanced by extending the 'nonconvergent model selection method' of stopped training to include dynamic topology modifications (dynamic weight pruning) and modified complexity penalty term methods in which the weighting of the penalty term is adjusted during the training process.
1 INTRODUCTION
One of the central topics in the field of neural networks is that of model selection. Both the theoretical and practical sides of this have been intensively investigated, and a vast array of methods has been suggested to perform this task. A widely used class of techniques starts by choosing an 'oversized' network architecture and then either removes redundant elements based on some measure of saliency (pruning), adds a further term to the cost function penalizing complexity (penalty terms), or observes the error on a further validation set of examples and stops training as soon as this performance begins to deteriorate (stopped training).
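A minimal sketch of plain stopped training (without the paper's pruning or adaptive-penalty extensions): the loop below trains a toy linear model by gradient descent and halts when validation error stops improving. The patience rule and every hyperparameter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data, split into a training set and a validation set.
X = rng.standard_normal((200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.standard_normal(200)
X_train, y_train, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

w = np.zeros(5)
best_w, best_val, patience, strikes = w.copy(), np.inf, 10, 0
for epoch in range(1000):
    grad = 2 * X_train.T @ (X_train @ w - y_train) / len(y_train)
    w -= 0.01 * grad
    val_err = np.mean((X_val @ w - y_val) ** 2)
    if val_err < best_val:
        best_val, best_w, strikes = val_err, w.copy(), 0
    else:
        strikes += 1
        if strikes >= patience:   # validation error keeps deteriorating
            break                 # stop training before convergence
w = best_w                        # keep the weights at the validation optimum
```

Note that the selected model is the one at the validation minimum, not the converged one; this is the 'nonconvergent' character of the method.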


Topography and Ocular Dominance with Positive Correlations

Neural Information Processing Systems

This is motivated by experimental evidence that these phenomena may be subserved by the same mechanisms. An important aspect of this model is that ocular dominance segregation can occur when input activity is both distributed, and positively correlated between the eyes. This allows investigation of the dependence of the pattern of ocular dominance stripes on the degree of correlation between the eyes: it is found that increasing correlation leads to narrower stripes. Experiments are suggested to test whether such behaviour occurs in the natural system.
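A minimal correlation-based sketch in the spirit of such models (not the paper's actual equations): one cortical unit receives input from both eyes with tunable positive correlation c, and a Hebbian rule with subtractive normalization lets the between-eye difference grow. Segregation still occurs for positive c, but more slowly as c increases. The normalization rule and every constant are illustrative assumptions.

```python
import numpy as np

def segregate(c, steps=500, lr=0.01, seed=0):
    """Return |w_left - w_right| after Hebbian development with correlation c."""
    rng = np.random.default_rng(seed)
    w = np.array([0.50, 0.51])                # nearly symmetric start
    L = np.linalg.cholesky([[1.0, c], [c, 1.0]])
    for _ in range(steps):
        a = L @ rng.standard_normal(2)        # correlated left/right activity
        w += lr * (w @ a) * a                 # Hebbian update: dw = lr * post * pre
        w -= (w.sum() - 1.01) / 2.0           # subtractive normalization (fix the sum)
        w = np.clip(w, 0.0, 1.01)             # hard bounds on the weights
    return abs(w[0] - w[1])

for c in (0.0, 0.4, 0.8):
    print(c, segregate(c))  # segregation persists for positive c, but weakens
```

Subtractive normalization removes growth along the correlated (1, 1) mode, so the difference mode grows at a rate roughly proportional to 1 - c; this is one simple way to see why positive between-eye correlation slows but does not abolish segregation.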



Assessing and Improving Neural Network Predictions by the Bootstrap Algorithm

Neural Information Processing Systems

The bootstrap method offers a computation-intensive alternative for estimating the predictive distribution of a neural network even if the analytic derivation is intractable. The available asymptotic results show that it is valid for a large number of linear, nonlinear and even nonparametric regression problems. It has the potential to model the distribution of estimators to a higher precision than the usual normal asymptotics. It may even be valid if the normal asymptotics fail. However, the theoretical properties of bootstrap procedures for neural networks, especially nonlinear models, have to be investigated more comprehensively.
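A minimal pairs-bootstrap sketch of a predictive distribution (illustrative: a least-squares fit stands in for the neural network, and the data sizes and names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data; in practice the network would be retrained per resample.
X = rng.uniform(-1, 1, size=(100, 1))
y = 2.0 * X[:, 0] + 0.3 * rng.standard_normal(100)

def fit_predict(Xb, yb, x_new):
    """Least-squares fit with intercept; a neural-net fit would go here."""
    A = np.column_stack([np.ones(len(Xb)), Xb])
    coef, *_ = np.linalg.lstsq(A, yb, rcond=None)
    return coef[0] + coef[1] * x_new

x_new, preds = 0.5, []
for _ in range(1000):
    idx = rng.integers(0, len(y), size=len(y))   # resample (x, y) pairs
    preds.append(fit_predict(X[idx], y[idx], x_new))
preds = np.array(preds)

lo, hi = np.percentile(preds, [2.5, 97.5])       # bootstrap percentile interval
print(f"prediction at x={x_new}: {preds.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The spread of the resampled predictions approximates the sampling distribution of the estimator directly, which is what allows the bootstrap to outperform normal asymptotics when the latter are inaccurate or invalid.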


Learning Control Under Extreme Uncertainty

Neural Information Processing Systems

A peg-in-hole insertion task is used as an example to illustrate the utility of direct associative reinforcement learning methods for learning control under real-world conditions of uncertainty and noise. Task complexity due to the use of an unchamfered hole and a clearance of less than 0.2 mm is compounded by the presence of positional uncertainty of magnitude exceeding 10 to 50 times the clearance. Despite this extreme degree of uncertainty, our results indicate that direct reinforcement learning can be used to learn a robust reactive control strategy that results in skillful peg-in-hole insertions.
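An illustrative sketch of direct associative reinforcement learning, here a REINFORCE-style Gaussian-exploration rule on a one-dimensional stand-in for the insertion task; the task, reward, and all constants are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D stand-in: the sensed lateral error s should be cancelled by the
# corrective action a; reward is the negative residual misalignment.
w, sigma, lr = 0.0, 0.3, 0.02
for _ in range(20_000):
    s = rng.uniform(-1.0, 1.0)                  # noisy sensed position error
    mean = w * s                                # linear reactive policy
    a = mean + sigma * rng.standard_normal()    # stochastic exploration
    r = -abs(s + a)                             # reward for a good insertion
    # REINFORCE: shift the policy toward actions that earned more reward
    w += lr * r * (a - mean) / sigma**2 * s
print(w)  # should drift toward -1: the policy learns to cancel the error
```

The point of the direct approach is that the controller improves from a scalar reward alone, with no model of the geometry or the uncertainty, which is why it can cope when the positional error dwarfs the clearance.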