Hierarchical Transformation of Space in the Visual System

Neural Information Processing Systems

Neurons encoding simple visual features in area V1, such as orientation, direction of motion, and color, are organized in retinotopic maps. However, recent physiological experiments have shown that the responses of many neurons in V1 and other cortical areas are modulated by the direction of gaze. We have developed a neural network model of the visual cortex to explore the hypothesis that visual features are encoded in head-centered coordinates at early stages of visual processing. New experiments are suggested for testing this hypothesis using electrical stimulation and psychophysical observations.
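
The abstract does not specify the modulation mechanism, but a standard way to formalize gaze-dependent responses in models of this kind is a multiplicative gain field: a retinotopic tuning curve scaled by a function of gaze direction. A minimal sketch, with all tuning parameters illustrative assumptions rather than values from the paper:

    import numpy as np

    def gain_field_response(retinal_pos, gaze_dir,
                            pref_pos=0.0, sigma=5.0, gaze_gain=0.02):
        """Retinotopic Gaussian tuning, multiplicatively scaled by gaze.

        Parameters are hypothetical: pref_pos/sigma set the retinal
        tuning curve, gaze_gain the strength of gaze modulation.
        """
        tuning = np.exp(-(retinal_pos - pref_pos) ** 2 / (2 * sigma ** 2))
        gain = 1.0 + gaze_gain * gaze_dir   # linear gaze modulation
        return tuning * gain

    # Same retinal stimulus, two gaze directions -> different responses.
    print(gain_field_response(2.0, gaze_dir=-20.0))
    print(gain_field_response(2.0, gaze_dir=+20.0))

A population of such units carries enough information for a downstream layer to read out stimulus position in head-centered coordinates (retinal position plus gaze).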


Learning Global Direct Inverse Kinematics

Neural Information Processing Systems

In general, control objectives such as the positioning and orienting of the end-effector are specified with respect to task-space coordinates; however, the manipulator is typically controlled only in the configuration space.
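
To make the task-space/configuration-space distinction concrete, here is a minimal sketch for a two-link planar arm (link lengths and angles are illustrative, not from the paper): the controller acts on joint angles, while the objective is stated as an end-effector position.

    import numpy as np

    def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
        """Map configuration-space angles to a task-space position
        for a two-link planar arm."""
        x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
        y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
        return np.array([x, y])

    # A global direct inverse model must pick joint angles for a target
    # (x, y); this forward map is many-to-one (elbow-up vs. elbow-down),
    # which is what makes learning a single-valued global inverse hard.
    print(forward_kinematics(np.pi / 4, np.pi / 3))
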


Connectionist Optimisation of Tied Mixture Hidden Markov Models

Neural Information Processing Systems

Issues relating to the estimation of hidden Markov model (HMM) local probabilities are discussed. In particular, we note the isomorphism of radial basis function (RBF) networks to tied mixture density modelling; additionally, we highlight the differences between these methods arising from the different training criteria employed. We present a method in which connectionist training can be modified to resolve these differences and discuss some preliminary experiments. Finally, we discuss some outstanding problems with discriminative training.
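
The isomorphism noted here can be stated directly: a tied-mixture state likelihood uses a pool of Gaussians shared across all states with state-specific mixture weights, which is exactly an RBF network whose hidden units are the shared Gaussians and whose output weights are the (normalized) mixture coefficients. A minimal sketch, with all dimensions and parameter values illustrative:

    import numpy as np

    def gaussian(x, mu, sigma):
        d = x - mu
        return np.exp(-0.5 * np.dot(d, d) / sigma ** 2) / (
            (2 * np.pi * sigma ** 2) ** (len(x) / 2))

    def tied_mixture_likelihoods(x, mus, sigmas, weights):
        """p(x | state s) = sum_k weights[s, k] * N(x; mus[k], sigmas[k]).

        The Gaussians (RBF hidden units) are shared ("tied") across
        states; only the mixture weights (RBF output weights) are
        state-specific. Each row of `weights` sums to one.
        """
        basis = np.array([gaussian(x, m, s) for m, s in zip(mus, sigmas)])
        return weights @ basis   # one likelihood per HMM state

    rng = np.random.default_rng(0)
    mus = rng.normal(size=(8, 2))                 # 8 tied Gaussians in 2-D
    sigmas = np.full(8, 1.0)
    weights = rng.dirichlet(np.ones(8), size=3)   # 3 HMM states
    print(tied_mixture_likelihoods(np.zeros(2), mus, sigmas, weights))
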




Improving the Performance of Radial Basis Function Networks by Learning Center Locations

Neural Information Processing Systems

Three methods for improving the performance of (Gaussian) radial basis function (RBF) networks were tested on the NETtalk task. In RBF, a new example is classified by computing its Euclidean distance to a set of centers chosen by unsupervised methods. The application of supervised learning to learn a non-Euclidean distance metric was found to reduce the error rate of RBF networks, while supervised learning of each center's variance resulted in inferior performance. The best improvement in accuracy was achieved by networks called generalized radial basis function (GRBF) networks. In GRBF, the center locations are determined by supervised learning. After training on 1000 words, RBF classifies 56.5% of letters correctly, while GRBF scores 73.4% of letters correctly (on a separate test set). From these and other experiments, we conclude that supervised learning of center locations can be very important for radial basis function learning.
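
A minimal sketch of the GRBF idea on toy data, using plain gradient descent on squared error; the paper's exact updates, classification setup, and NETtalk encoding are not reproduced here:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))                      # toy inputs
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)   # toy targets

    K, sigma, lr = 10, 1.0, 0.05
    centers = X[rng.choice(len(X), K, replace=False)].copy()
    w = np.zeros(K)

    def activations(X, centers):
        d = X[:, None, :] - centers[None, :, :]        # (n, K, dim)
        return np.exp(-np.sum(d ** 2, axis=2) / (2 * sigma ** 2)), d

    for _ in range(500):
        phi, d = activations(X, centers)
        err = phi @ w - y                              # residuals
        grad_w = phi.T @ err / len(X)
        # Chain rule through the Gaussian: this is the supervised
        # "learn the center locations" step that defines GRBF.
        grad_c = (np.einsum('n,nk,nkd->kd', err, phi, d)
                  * w[:, None] / (sigma ** 2 * len(X)))
        w -= lr * grad_w
        centers -= lr * grad_c

    phi, _ = activations(X, centers)
    print("training MSE:", np.mean((phi @ w - y) ** 2))

Plain RBF corresponds to freezing `centers` after the unsupervised initialization and updating only `w`.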


Recurrent Networks and NARMA Modeling

Neural Information Processing Systems

There exist large classes of time series, such as those with nonlinear moving-average components, that are not well modeled by feedforward networks or linear models, but can be modeled by recurrent networks. We show that recurrent neural networks are a type of nonlinear autoregressive-moving average (NARMA) model. Practical ability is demonstrated by the results of a competition sponsored by the Puget Sound Power and Light Company, where the recurrent networks gave the best performance on electric load forecasting.

1 Introduction

This paper will concentrate on identifying types of time series for which a recurrent network provides a significantly better model, and corresponding prediction, than a feedforward network. Our main interest is in discrete time series that are parsimoniously modeled by a simple recurrent network, but for which a feedforward neural network is highly non-parsimonious by virtue of requiring an infinite number of past observations as input to achieve the same accuracy in prediction. Our approach is to consider predictive neural networks as stochastic models.
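
Concretely, a NARMA(p, q) predictor has the form y_t = h(y_{t-1}, ..., y_{t-p}, e_{t-1}, ..., e_{t-q}) + e_t, where the e's are past prediction errors; feeding the errors (equivalently, the network's own outputs) back as inputs supplies the moving-average memory that a finite-window feedforward network lacks. A minimal sketch with a stand-in nonlinearity h (illustrative only, not the trained network from the paper):

    import numpy as np

    def narma_predict(y, h, p=2, q=2):
        """One-step NARMA(p, q) prediction with error feedback:
        y_hat[t] = h(y[t-p..t-1], e[t-q..t-1]), e[t] = y[t] - y_hat[t].
        `h` is any nonlinear map, e.g. a trained recurrent net."""
        y_hat = np.zeros_like(y)
        e = np.zeros_like(y)
        for t in range(max(p, q), len(y)):
            y_hat[t] = h(y[t - p:t], e[t - q:t])
            e[t] = y[t] - y_hat[t]
        return y_hat

    # Toy h: linear AR part plus a nonlinear MA part (hypothetical).
    h = lambda ys, es: 0.5 * ys[-1] + 0.25 * np.tanh(es[-1])
    y = np.sin(np.linspace(0, 10, 100))
    print(np.mean((narma_predict(y, h) - y) ** 2))
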


The Effective Number of Parameters: An Analysis of Generalization and Regularization in Nonlinear Learning Systems

Neural Information Processing Systems

We present an analysis of how the generalization performance (expected test set error) relates to the expected training set error for nonlinear learning systems, such as multilayer perceptrons and radial basis functions.
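
The analysis centers on an effective number of parameters p_eff(lambda) that replaces the raw weight count under regularization strength lambda. As a hedged restatement of the kind of relation involved (notation ours, not quoted from the paper):

    expected test error  ~  expected training error + 2 * sigma_eff^2 * p_eff(lambda) / n

where n is the training set size and sigma_eff^2 an effective noise variance; for an unregularized linear model, p_eff reduces to the ordinary number of parameters.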


Principled Architecture Selection for Neural Networks: Application to Corporate Bond Rating Prediction

Neural Information Processing Systems

The notion of generalization ability can be defined precisely as the prediction risk, the expected performance of an estimator in predicting new observations. In this paper, we propose the prediction risk as a measure of the generalization ability of multi-layer perceptron networks and use it to select an optimal network architecture from a set of possible architectures. We also propose a heuristic search strategy to explore the space of possible architectures. The prediction risk is estimated from the available data; here we estimate the prediction risk by v-fold cross-validation and by asymptotic approximations of generalized cross-validation or Akaike's final prediction error. We apply the technique to the problem of predicting corporate bond ratings. This problem is very attractive as a case study, since it is characterized by the limited availability of data and by the lack of a complete a priori model which could be used to impose a structure on the network architecture.
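
The two asymptotic estimates mentioned have standard closed forms: with training mean squared error MSE, n observations, and p (effective) parameters, GCV = MSE / (1 - p/n)^2 and FPE = MSE * (1 + p/n) / (1 - p/n). A minimal sketch of the v-fold estimate itself; the fit/predict interface and the least-squares example are assumptions for illustration:

    import numpy as np

    def v_fold_prediction_risk(X, y, fit, predict, v=5, seed=0):
        """Estimate prediction risk by v-fold cross-validation:
        train on v-1 folds, measure squared error on the held-out
        fold, and average over all folds."""
        idx = np.random.default_rng(seed).permutation(len(X))
        folds = np.array_split(idx, v)
        risks = []
        for k in range(v):
            test = folds[k]
            train = np.concatenate([folds[j] for j in range(v) if j != k])
            model = fit(X[train], y[train])
            risks.append(np.mean((predict(model, X[test]) - y[test]) ** 2))
        return np.mean(risks)

    # Usage with a least-squares model standing in for one architecture:
    fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
    predict = lambda w, X: X @ w
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.0, -2.0, 0.5])
    print(v_fold_prediction_risk(X, y, fit, predict))

Architecture selection then amounts to computing this risk estimate for each candidate network and keeping the minimizer.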


Recognition of Manipulated Objects by Motor Learning

Neural Information Processing Systems

We present two neural network controller learning schemes based on feedback-error learning and a modular architecture for recognition and control of multiple manipulated objects. In the first scheme, a Gating Network is trained to acquire object-specific representations for recognition of a number of objects (or sets of objects). In the second scheme, an Estimation Network is trained to acquire function-specific, rather than object-specific, representations which directly estimate physical parameters of the manipulated objects. Both recognition networks are trained to identify manipulated objects using somatic and/or visual information. After learning, appropriate motor commands for manipulation of each object are issued by the control networks.
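
A minimal sketch of the modular structure in the first scheme, assuming a linear expert per object and a softmax gate (generic choices, not the paper's exact networks): the gating network soft-selects among expert controllers, and its strongest output doubles as the recognized object identity.

    import numpy as np

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def modular_controller(sensory, gate_W, expert_Ws):
        """Gating network -> mixture weights over expert controllers;
        each expert maps sensory input to a motor command, and the
        gate's argmax serves as the recognized object identity."""
        g = softmax(gate_W @ sensory)                     # (n_experts,)
        commands = np.array([W @ sensory for W in expert_Ws])
        motor = g @ commands                              # blended command
        return motor, int(np.argmax(g))                   # command, object id

    rng = np.random.default_rng(0)
    gate_W = rng.normal(size=(3, 5))        # 3 objects, 5-D sensory input
    expert_Ws = [rng.normal(size=(2, 5)) for _ in range(3)]  # 2-D commands
    motor, obj = modular_controller(rng.normal(size=5), gate_W, expert_Ws)
    print(motor, obj)
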