Hierarchical Transformation of Space in the Visual System
Pouget, Alexandre, Fisher, Stephen A., Sejnowski, Terrence J.
Neurons encoding simple visual features in area V1, such as orientation, direction of motion and color, are organized in retinotopic maps. However, recent physiological experiments have shown that the responses of many neurons in V1 and other cortical areas are modulated by the direction of gaze. We have developed a neural network model of the visual cortex to explore the hypothesis that visual features are encoded in head-centered coordinates at early stages of visual processing. New experiments are suggested for testing this hypothesis using electrical stimulation and psychophysical observations.
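The gain-modulation idea can be made concrete with a minimal numerical sketch. This is not the authors' model; the Gaussian tuning curve, the planar multiplicative gain, and all parameter values are illustrative assumptions. The point is that a retinotopic unit whose amplitude is scaled by eye position implicitly carries head-centered information across a population.

    import numpy as np

    def unit_response(retinal_pos, preferred_pos, gaze, gain_slope=0.05, sigma=2.0):
        """Toy gain-field unit: a retinotopic Gaussian tuning curve whose
        amplitude is scaled multiplicatively by eye position (assumed planar gain)."""
        tuning = np.exp(-(retinal_pos - preferred_pos) ** 2 / (2 * sigma ** 2))
        return (1.0 + gain_slope * gaze) * tuning

    # A target fixed in head-centered space at +5 deg: as gaze shifts, its
    # retinal position shifts, but the responses of a population of such
    # gain-modulated units still determine the head-centered location.
    for gaze in (-10.0, 0.0, 10.0):
        retinal = 5.0 - gaze
        print(f"gaze={gaze:+.0f}  response={unit_response(retinal, 0.0, gaze):.3f}")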
Connectionist Optimisation of Tied Mixture Hidden Markov Models
Renals, Steve, Morgan, Nelson, Bourlard, Hervé, Franco, Horacio, Cohen, Michael
Issues relating to the estimation of hidden Markov model (HMM) local probabilities are discussed. In particular we note the isomorphism of radial basis function (RBF) networks to tied mixture density modelling; additionally we highlight the differences between these methods arising from the different training criteria employed. We present a method in which connectionist training can be modified to resolve these differences and discuss some preliminary experiments. Finally, we discuss some outstanding problems with discriminative training.
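A minimal sketch of the isomorphism noted above, under assumed spherical Gaussian components with a shared variance: the tied-mixture state likelihoods and the outputs of a Gaussian RBF network with a linear output layer are the same expression, so the two methods differ only in how the parameters are trained (maximum likelihood vs. discriminative criteria).

    import numpy as np

    rng = np.random.default_rng(0)
    K, Q, D = 8, 3, 2                      # shared Gaussians, HMM states, feature dim
    mu = rng.normal(size=(K, D))           # tied component means, shared by all states
    sigma = 1.0                            # shared spherical variance (an assumption)
    c = rng.dirichlet(np.ones(K), size=Q)  # per-state mixture weights c[q, k]

    def kernels(x):
        """Gaussian densities that serve both as the tied mixture components
        and as the hidden layer of a Gaussian RBF network."""
        d2 = ((x - mu) ** 2).sum(axis=1)
        return np.exp(-d2 / (2 * sigma**2)) / (2 * np.pi * sigma**2) ** (D / 2)

    x = rng.normal(size=D)
    phi = kernels(x)
    # Tied-mixture state likelihoods p(x | q); an RBF network whose output
    # weights are c[q, k] computes the identical quantity.
    print(c @ phi)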
Improving the Performance of Radial Basis Function Networks by Learning Center Locations
Wettschereck, Dietrich, Dietterich, Thomas
Three methods for improving the performance of (Gaussian) radial basis function (RBF) networks were tested on the NETtalk task. In RBF, a new example is classified by computing its Euclidean distance to a set of centers chosen by unsupervised methods. The application of supervised learning to learn a non-Euclidean distance metric was found to reduce the error rate of RBF networks, while supervised learning of each center's variance resulted in inferior performance. The best improvement in accuracy was achieved by networks called generalized radial basis function (GRBF) networks. In GRBF, the center locations are determined by supervised learning. After training on 1000 words, RBF classifies 56.5% of letters correctly, while GRBF scores 73.4% of letters correctly (on a separate test set). From these and other experiments, we conclude that supervised learning of center locations can be very important for radial basis function learning.
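The GRBF step, supervised gradient descent on the center locations, can be sketched on a toy one-dimensional regression problem. The data, kernel width, and learning rates below are illustrative assumptions standing in for the NETtalk setup.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(200, 1))                     # toy data, not NETtalk
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

    K, width = 10, 1.0
    centers = X[rng.choice(len(X), K, replace=False)].copy()  # unsupervised init
    W = np.zeros(K)

    def forward(X, centers, W):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        Phi = np.exp(-d2 / (2 * width**2))
        return Phi, Phi @ W

    lr_w, lr_c = 0.1, 0.05
    for _ in range(500):
        Phi, pred = forward(X, centers, W)
        err = pred - y
        W -= lr_w * Phi.T @ err / len(X)                      # plain RBF: weights only
        # GRBF step: gradient of the squared error w.r.t. the center locations.
        coef = err[:, None] * W[None, :] * Phi / width**2                # (N, K)
        grad_c = (coef[:, :, None] * (X[:, None, :] - centers[None, :, :])).sum(0)
        centers -= lr_c * grad_c / len(X)

    print("training MSE:", np.mean((forward(X, centers, W)[1] - y) ** 2))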
Recurrent Networks and NARMA Modeling
Connor, Jerome, Atlas, Les E., Martin, Douglas R.
There exist large classes of time series, such as those with nonlinear moving average components, that are not well modeled by feedforward networks or linear models, but can be modeled by recurrent networks. We show that recurrent neural networks are a type of nonlinear autoregressive-moving average (NARMA) model. Practical ability will be shown in the results of a competition sponsored by the Puget Sound Power and Light Company, where the recurrent networks gave the best performance on electric load forecasting.

1 Introduction

This paper will concentrate on identifying types of time series for which a recurrent network provides a significantly better model, and corresponding prediction, than a feedforward network. Our main interest is in discrete time series that are parsimoniously modeled by a simple recurrent network, but for which a feedforward neural network is highly non-parsimonious by virtue of requiring an infinite number of past observations as input to achieve the same accuracy in prediction. Our approach is to consider predictive neural networks as stochastic models.
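A linear toy illustrates the parsimony argument (assuming the moving-average coefficient is known; the paper's recurrent networks learn the nonlinear analogue): an MA(1) process has an infinite-order autoregressive representation, so any finite-window feedforward predictor is an approximation, while a single fed-back prediction error reproduces it exactly.

    import numpy as np

    rng = np.random.default_rng(2)
    theta = 0.9
    e = rng.normal(size=1000)
    x = e + theta * np.concatenate([[0.0], e[:-1]])  # MA(1): x_t = e_t + theta*e_{t-1}

    # Recurrent one-step predictor: one fed-back state (the previous
    # prediction error) realizes the infinite-order AR expansion exactly.
    e_hat, preds = 0.0, []
    for t in range(len(x)):
        pred = theta * e_hat         # x_hat_t = theta * e_hat_{t-1}
        preds.append(pred)
        e_hat = x[t] - pred          # state update: estimated innovation
    print("recurrent one-step MSE:", np.mean((x - np.array(preds)) ** 2))  # ~ Var(e) = 1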
Principled Architecture Selection for Neural Networks: Application to Corporate Bond Rating Prediction
The notion of generalization ability can be defined precisely as the prediction risk, the expected performance of an estimator in predicting new observations. In this paper, we propose the prediction risk as a measure of the generalization ability of multi-layer perceptron networks and use it to select an optimal network architecture from a set of possible architectures. We also propose a heuristic search strategy to explore the space of possible architectures. The prediction risk is estimated from the available data; here we estimate the prediction risk by v-fold cross-validation and by asymptotic approximations of generalized cross-validation or Akaike's final prediction error. We apply the technique to the problem of predicting corporate bond ratings. This problem is very attractive as a case study, since it is characterized by the limited availability of data and by the lack of a complete a priori model which could be used to impose a structure on the network architecture.
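A sketch of the v-fold cross-validation estimate of the prediction risk, with polynomial degree standing in for the network architecture being compared; the model family, data, and fold count are illustrative assumptions, not the paper's experiment.

    import numpy as np

    def cv_prediction_risk(X, y, fit, predict, v=5, seed=0):
        """v-fold cross-validation estimate of the prediction risk
        (expected squared error on new observations)."""
        idx = np.random.default_rng(seed).permutation(len(X))
        folds = np.array_split(idx, v)
        losses = []
        for k in range(v):
            test = folds[k]
            train = np.concatenate([folds[j] for j in range(v) if j != k])
            model = fit(X[train], y[train])
            losses.append(np.mean((predict(model, X[test]) - y[test]) ** 2))
        return float(np.mean(losses))

    # Architecture selection stand-in: choose the candidate with the
    # smallest estimated risk, just as one would compare candidate networks.
    rng = np.random.default_rng(3)
    X = rng.uniform(-1, 1, size=200)
    y = np.sin(3 * X) + 0.2 * rng.normal(size=200)
    for degree in (1, 3, 5, 9):
        risk = cv_prediction_risk(X, y,
                                  fit=lambda Xtr, ytr, d=degree: np.polyfit(Xtr, ytr, d),
                                  predict=np.polyval)
        print(f"degree {degree}: estimated risk {risk:.4f}")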
Recognition of Manipulated Objects by Motor Learning
We present two neural network controller learning schemes based on feedback-error-learning and a modular architecture for recognition and control of multiple manipulated objects. In the first scheme, a Gating Network is trained to acquire object-specific representations for recognition of a number of objects (or sets of objects). In the second scheme, an Estimation Network is trained to acquire function-specific, rather than object-specific, representations which directly estimate physical parameters. Both recognition networks are trained to identify manipulated objects using somatic and/or visual information. After learning, appropriate motor commands for manipulation of each object are issued by the control networks.
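A loose sketch of the gating idea in the spirit of a mixture-of-experts controller. This is not the paper's networks: the one-parameter object dynamics, the somatic feature, and the soft-competition update are all assumptions made for illustration.

    import numpy as np

    def softmax(z):
        z = z - z.max()
        return np.exp(z) / np.exp(z).sum()

    rng = np.random.default_rng(4)
    masses = np.array([0.5, 1.0, 2.0])   # one-parameter dynamics per object (assumed)
    W_gate = 0.1 * rng.normal(size=(3, 2))

    for _ in range(2000):
        obj = rng.integers(3)                   # object actually being manipulated
        feat = np.array([masses[obj], 1.0])     # crude somatic feature (assumed)
        a = rng.normal()                        # desired acceleration
        g = softmax(W_gate @ feat)              # gating responsibilities
        u = g @ (masses * a)                    # blended expert commands, as issued
        # Soft competition: experts whose command matches the one that would
        # drive this object correctly gain responsibility (posterior h).
        lik = np.exp(-0.5 * (masses * a - masses[obj] * a) ** 2)
        h = g * lik / (g * lik).sum()
        W_gate += 0.05 * np.outer(h - g, feat)  # move gating toward posterior

    print(softmax(W_gate @ np.array([masses[0], 1.0])))  # should favor object 0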