Simulation of Optimal Movements Using the Minimum-Muscle-Tension-Change Model

Neural Information Processing Systems

This work discusses various optimization techniques that have been proposed in models for controlling arm movements. In particular, the minimum-muscle-tension-change model is investigated. A dynamic simulator of the monkey's arm, including seventeen single- and double-joint muscles, is utilized to generate horizontal hand movements. The hand trajectories produced by this algorithm are discussed.
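
As a rough illustration of the criterion being minimized, the sketch below (Python; the array names and the finite-difference discretization are assumptions, not taken from the paper) evaluates the summed squared rate of change of muscle tension over a discretized movement. The paper's actual procedure optimizes this cost subject to the simulated arm dynamics, which the sketch does not model.

    import numpy as np

    def tension_change_cost(tensions, dt):
        """Discretized minimum-muscle-tension-change criterion (illustrative).

        tensions : array of shape (n_steps, n_muscles), muscle tensions T_i(t)
        dt       : simulation time step in seconds
        Returns C = 1/2 * sum over time and muscles of (dT_i/dt)^2 * dt.
        """
        dT = np.diff(tensions, axis=0) / dt      # finite-difference tension rates
        return 0.5 * np.sum(dT ** 2) * dt

    # Example: a hypothetical 17-muscle tension trajectory over a 500 ms movement
    rng = np.random.default_rng(0)
    traj = rng.random((501, 17))
    print(tension_change_cost(traj, dt=0.001))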


English Alphabet Recognition with Telephone Speech

Neural Information Processing Systems

Mark Fanty, Ronald A. Cole and Krist Roginski, Center for Spoken Language Understanding, Oregon Graduate Institute of Science and Technology, 19600 N.W. Von Neumann Dr., Beaverton, OR 97006

A recognition system is reported which recognizes names spelled over the telephone with brief pauses between letters. The system uses separate neural networks to locate segment boundaries and classify letters. The letter scores are then used to search a database of names to find the best-scoring name. The speaker-independent classification rate for spoken letters is 89%. The system retrieves the correct name, spelled with pauses between letters, 91% of the time from a database of 50,000 names. The English alphabet is difficult to recognize automatically because many letters sound alike; e.g., B/D, P/T, V/Z and F/S.
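
As an illustration of the retrieval step, the following Python sketch scores candidate names against per-segment letter probabilities and returns the best match; the function names, the log-score combination, and the smoothing constant are assumptions for illustration, not the paper's exact search procedure.

    import numpy as np

    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

    def name_score(name, letter_scores):
        """Score a candidate name against per-segment classifier outputs.

        letter_scores : (n_segments, 26) array of letter probabilities,
                        one row per spoken letter segment.
        Returns the summed log-probability, or -inf if lengths differ.
        """
        if len(name) != len(letter_scores):
            return -np.inf
        idx = [ALPHABET.index(c) for c in name.upper()]
        probs = letter_scores[np.arange(len(name)), idx]
        return float(np.sum(np.log(probs + 1e-12)))

    def retrieve(letter_scores, database):
        # Return the best-scoring name from the spelled-name database.
        return max(database, key=lambda n: name_score(n, letter_scores))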


A Parallel Analog CCD/CMOS Signal Processor

Neural Information Processing Systems

A CCD-based signal processing IC that computes a fully parallel single-quadrant vector-matrix multiplication has been designed and fabricated with a 2 µm CCD/CMOS process. The device incorporates an array of Charge Coupled Devices (CCD) which hold an analog matrix of charge encoding the matrix elements. Input vectors are digital with 1-8 bit accuracy.
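
The computation such a device performs can be modeled numerically as below; this Python sketch is a plain shift-and-add model of a single-quadrant vector-matrix product accumulated bit plane by bit plane from a digital input, not a description of the charge-domain circuitry.

    import numpy as np

    def ccd_vmm(matrix, x_digital, n_bits=8):
        """Numerical model of a single-quadrant, bit-serial vector-matrix multiply.

        matrix    : (m, n) non-negative analog weights (charge per cell)
        x_digital : (n,) non-negative integers representable in n_bits bits
        The product is accumulated bit plane by bit plane, mimicking a
        shift-and-add style of accumulation.
        """
        matrix = np.asarray(matrix, dtype=float)
        x_digital = np.asarray(x_digital)
        acc = np.zeros(matrix.shape[0])
        for b in range(n_bits - 1, -1, -1):       # most significant bit first
            bit_plane = (x_digital >> b) & 1
            acc = 2.0 * acc + matrix @ bit_plane  # shift (x2), then add this plane
        return acc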


Hierarchies of adaptive experts

Neural Information Processing Systems

Another class of nonlinear algorithms, exemplified by CART (Breiman, Friedman, Olshen, & Stone, 1984) and MARS (Friedman, 1990), generalizes classical techniques by partitioning the training data into non-overlapping regions and fitting separate models in each of the regions. These two classes of algorithms extend linear techniques in essentially independent directions; it therefore seems worthwhile to investigate algorithms that incorporate aspects of both approaches to model estimation. Such algorithms would be related to CART and MARS as multilayer neural networks are related to linear statistical techniques.
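
A minimal sketch of the kind of module this suggests, assuming a single softmax gating network over linear experts (an assumption for illustration; the paper builds hierarchies of such modules and gives the accompanying learning rules), is shown below in Python.

    import numpy as np

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def mixture_predict(x, expert_weights, gate_weights):
        """One level of a mixture of adaptive experts (soft, CART-like partition).

        expert_weights : list of (d,) vectors, one linear expert per region
        gate_weights   : (n_experts, d) matrix for the softmax gating network
        The gate soft-partitions the input space; the prediction is the
        gate-weighted sum of the experts' linear outputs.
        """
        g = softmax(gate_weights @ x)                     # soft region memberships
        expert_outputs = np.array([w @ x for w in expert_weights])
        return float(g @ expert_outputs)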



Active Exploration in Dynamic Environments

Neural Information Processing Systems

Many real-valued connectionist approaches to learning control realize exploration by randomness in action selection. This might be disadvantageous when costs are assigned to "negative experiences". The basic idea presented in this paper is to make an agent explore unknown regions in a more directed manner. This is achieved by a so-called competence map, which is trained to predict the controller's accuracy and is used to guide exploration. Based on this, a bistable system enables smooth switching of attention between two behaviors, exploration and exploitation, depending on expected costs and knowledge gain. The appropriateness of this method is demonstrated by a simple robot navigation task.
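
Purely as an illustration of the cost versus knowledge-gain trade-off, the Python sketch below selects actions using a competence-map estimate of controller error; the function names and the linear trade-off are assumptions, and the paper's bistable attention mechanism is not modeled here.

    import numpy as np

    def select_action(candidate_actions, expected_cost, competence_map, beta=1.0):
        """Directed exploration guided by a competence map (illustrative sketch).

        expected_cost(a)  : estimated cost of taking action a
        competence_map(a) : predicted controller error after action a; a high
                            predicted error marks an unknown region and hence
                            a large expected knowledge gain.
        beta trades off exploitation (low cost) against exploration (high gain).
        """
        utilities = [beta * competence_map(a) - expected_cost(a)
                     for a in candidate_actions]
        return candidate_actions[int(np.argmax(utilities))]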


Modeling Applications with the Focused Gamma Net

Neural Information Processing Systems

The focused gamma network is proposed as one of the possible implementations of the gamma neural model. The focused gamma network is compared with the focused backpropagation network and TDNN for a time series prediction problem, and with ADALINE in a system identification problem.
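
For reference, the gamma memory underlying the focused gamma network maintains taps x_0(t) = u(t) and x_k(t) = (1 - mu) x_k(t-1) + mu x_{k-1}(t-1) for k >= 1; the Python sketch below (function name and defaults are illustrative) computes these taps for an input sequence. A static feedforward network trained on the taps completes the focused architecture.

    import numpy as np

    def gamma_memory(u, order=4, mu=0.5):
        """Gamma memory taps (illustrative sketch).

        Implements x_0(t) = u(t) and, for k >= 1,
            x_k(t) = (1 - mu) * x_k(t-1) + mu * x_{k-1}(t-1),
        i.e. a cascade of leaky integrators whose memory depth and resolution
        are controlled by mu. Returns an array of shape (len(u), order + 1)
        with one column per tap.
        """
        x = np.zeros(order + 1)
        taps = np.zeros((len(u), order + 1))
        for t, u_t in enumerate(u):
            x_prev = x.copy()
            x[0] = u_t
            for k in range(1, order + 1):
                x[k] = (1.0 - mu) * x_prev[k] + mu * x_prev[k - 1]
            taps[t] = x
        return taps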



Constant-Time Loading of Shallow 1-Dimensional Networks

Neural Information Processing Systems

The complexity of learning in shallow 1-dimensional neural networks has been shown elsewhere to be linear in the size of the network. However, when the network has a huge number of units (as cortex has), even linear time might be unacceptable. Furthermore, the algorithm that was given to achieve this time was based on a single serial processor and was biologically implausible. In this work we consider the more natural parallel model of processing and demonstrate an expected-time complexity that is constant (i.e., independent of the size of the network).


Bayesian Model Comparison and Backprop Nets

Neural Information Processing Systems

The Bayesian model comparison framework is reviewed, and the Bayesian Occam's razor is explained. This framework can be applied to feedforward networks, making possible (1) objective comparisons between solutions using alternative network architectures; (2) objective choice of magnitude and type of weight decay terms; (3) quantified estimates of the error bars on network parameters and on network output. The framework also generates a measure of the effective number of parameters determined by the data. The relationship of Bayesian model comparison to recent work on prediction of generalisation ability (Guyon et al., 1992; Moody, 1992) is discussed.
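
One concrete quantity from this framework is the effective number of well-determined parameters, gamma = sum_i lambda_i / (lambda_i + alpha), where the lambda_i are eigenvalues of the Hessian of the data misfit and alpha is the weight-decay coefficient. The short Python sketch below evaluates it; the example eigenvalues are made up for illustration.

    import numpy as np

    def effective_parameters(hessian_eigenvalues, alpha):
        """Effective number of well-determined parameters.

        gamma = sum_i lambda_i / (lambda_i + alpha), where lambda_i are the
        eigenvalues of the Hessian of the data error and alpha is the
        weight-decay (prior precision) coefficient.
        """
        lam = np.asarray(hessian_eigenvalues, dtype=float)
        return float(np.sum(lam / (lam + alpha)))

    # Eigenvalues much larger than alpha count fully, the rest barely at all.
    print(effective_parameters([100.0, 10.0, 0.01], alpha=1.0))   # approx. 1.91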