A Boundary Hunting Radial Basis Function Classifier which Allocates Centers Constructively

Neural Information Processing Systems

A new boundary hunting radial basis function (BH-RBF) classifier that allocates RBF centers constructively near class boundaries is described. This classifier creates complex decision boundaries only in regions where confusions occur and the corresponding RBF outputs are similar. A predicted squared error measure is used to determine how many centers to add and when to stop adding them. Two experiments are presented that demonstrate the advantages of the BH-RBF classifier. One uses artificial data with two classes and two input features, where each class contains four clusters but only one cluster is near a decision-region boundary.
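
As a concrete illustration, here is a minimal sketch in Python (not the authors' implementation): after each least-squares fit, the training point whose top two class outputs are closest is treated as the most confused, and a new RBF center is placed there. The training mean squared error stands in for the paper's predicted squared error measure, and the Gaussian width, ridge term, and stopping tolerance are assumptions.

```python
import numpy as np

def rbf_features(X, centers, width):
    """Gaussian RBF activations, shape (n_samples, n_centers)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def fit_weights(Phi, Y, ridge=1e-6):
    """Ridge-regularised least-squares output weights."""
    return np.linalg.solve(Phi.T @ Phi + ridge * np.eye(Phi.shape[1]), Phi.T @ Y)

def boundary_hunting_rbf(X, y, width=0.5, max_centers=20, tol=1e-3):
    Y = np.eye(y.max() + 1)[y]                    # one-hot class targets
    # start with one center per class: the class mean
    centers = np.stack([X[y == c].mean(0) for c in range(Y.shape[1])])
    prev_err = np.inf
    while len(centers) < max_centers:
        Phi = rbf_features(X, centers, width)
        W = fit_weights(Phi, Y)
        out = Phi @ W
        err = ((out - Y) ** 2).mean()             # stand-in for predicted SE
        if prev_err - err < tol:                  # stop when adding centers
            break                                 # no longer pays off
        prev_err = err
        # boundary hunting: add a center at the sample whose top two
        # class outputs are most similar, i.e. the most confused point
        top2 = np.sort(out, axis=1)[:, -2:]
        centers = np.vstack([centers, X[(top2[:, 1] - top2[:, 0]).argmin()]])
    W = fit_weights(rbf_features(X, centers, width), Y)
    return centers, W
```

Because centers are only ever added where class outputs nearly tie, model complexity concentrates along the decision boundary, which is the behavior the abstract describes.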


Physiologically Based Speech Synthesis

Neural Information Processing Systems

This study demonstrates a paradigm for modeling speech production based on neural networks. Using physiological data from speech utterances, a neural network learns the forward dynamics relating motor commands to muscles and the ensuing articulator behavior; this allows articulator trajectories to be generated from motor commands constrained by phoneme input strings and global performance parameters. From these movement trajectories, a second neural network generates PARCOR parameters that are then used to synthesize the speech acoustics.
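
The cascade described here, motor commands to articulator trajectories to PARCOR coefficients, can be pictured as two stacked networks. The sketch below wires up untrained toy MLPs with invented dimensions purely to show the data flow; the real system is trained on physiological measurements from speech utterances.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(sizes):
    """Tiny randomly initialised MLP; returns a forward function."""
    Ws = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for W in Ws[:-1]:
            x = np.tanh(x @ W)
        return x @ Ws[-1]
    return forward

# hypothetical dimensions: 10 motor-command channels, 6 articulator
# coordinates, 12 PARCOR coefficients per frame
forward_dynamics = make_mlp([10, 32, 6])   # motor commands -> articulators
acoustic_map     = make_mlp([6, 32, 12])   # articulators -> PARCOR params

motor = rng.normal(size=(100, 10))         # one utterance, 100 time frames
trajectory = forward_dynamics(motor)       # articulator trajectories
parcor = acoustic_map(trajectory)          # frame-wise PARCOR coefficients
print(trajectory.shape, parcor.shape)      # (100, 6) (100, 12)
```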


Weight Space Probability Densities in Stochastic Learning: I. Dynamics and Equilibria

Neural Information Processing Systems

The ensemble dynamics of stochastic learning algorithms can be studied using theoretical techniques from statistical physics. We develop the equations of motion for the weight space probability densities for stochastic learning algorithms. We discuss equilibria in the diffusion approximation and provide expressions for special cases of the LMS algorithm. The equilibrium densities are not in general thermal (Gibbs) distributions in the objective function being minimized, but rather depend upon an effective potential that includes diffusion effects. Finally we present an exact analytical expression for the time evolution of the density for a learning algorithm with weight updates proportional to the sign of the gradient.
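
In the diffusion approximation the density evolution takes a Fokker-Planck form; the equations below follow the standard construction with assumed notation, taking the drift A as the mean weight update and the diffusion B as its covariance. The one-dimensional stationary solution makes the point about non-Gibbs equilibria explicit: unless B is constant, the effective potential differs from the objective being minimized.

```latex
% Fokker-Planck equation for the weight density P(w,t) in the diffusion
% approximation; \mu is the learning rate, A(w) the mean update (drift),
% B(w) the update covariance (diffusion). Notation assumed for this sketch.
\frac{\partial P(w,t)}{\partial t}
  = -\sum_i \frac{\partial}{\partial w_i}\,\bigl[\mu\,A_i(w)\,P(w,t)\bigr]
  + \frac{\mu^2}{2}\sum_{i,j}
      \frac{\partial^2}{\partial w_i\,\partial w_j}\,\bigl[B_{ij}(w)\,P(w,t)\bigr]

% One-dimensional stationary (zero-flux) solution: an effective potential
% that folds in the state-dependent diffusion, hence not a Gibbs density
% in the objective unless B is constant.
P_{\mathrm{eq}}(w) \propto \frac{1}{B(w)}
  \exp\!\left(\frac{2}{\mu}\int^{w}\frac{A(u)}{B(u)}\,du\right)
  \equiv e^{-U_{\mathrm{eff}}(w)}
```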


Attractor Neural Networks with Local Inhibition: from Statistical Physics to a Digital Programmable Integrated Circuit

Neural Information Processing Systems

In particular, the critical capacity of the network is increased, as well as its capability to store correlated patterns. Chaotic dynamic behaviour (exponentially long transients) of the devices indicates the overloading of the associative memory. An implementation based on a programmable logic device is presented here. A 16-neuron circuit is implemented with a XILINX 4020 device. The peculiarity of this solution is the possibility of changing parts of the design (weights, transfer function, or the whole architecture) with a simple software download of the configuration into the XILINX chip. 1 INTRODUCTION Attractor neural networks endowed with local inhibitory feedback have been shown to have interesting computational performance [1].
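
A toy version of such a network, Hebbian couplings plus a per-unit inhibitory feedback subtracted from each input field, can be sketched as follows. The feedback rule and its strength gamma are assumptions of this sketch rather than the paper's exact model; the 16-unit size mirrors the implemented circuit.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 16, 3                               # 16 units, as in the circuit
patterns = rng.choice([-1, 1], size=(P, N))

J = (patterns.T @ patterns).astype(float) / N   # Hebbian couplings
np.fill_diagonal(J, 0.0)                        # no self-connections

def recall(s, steps=30, gamma=0.3):
    """Asynchronous dynamics with a local inhibitory feedback r_i that
    tracks each unit's recent activity and is subtracted from its input
    field; the feedback rule and gamma are assumptions of this sketch."""
    r = np.zeros(N)
    for _ in range(steps):
        for i in rng.permutation(N):
            s[i] = 1 if J[i] @ s - gamma * r[i] >= 0 else -1
        r = 0.9 * r + 0.1 * s              # inhibition follows activity
    return s

probe = patterns[0].astype(float).copy()
probe[:4] *= -1                            # corrupt 4 of 16 bits
print((recall(probe) == patterns[0]).mean())   # fraction of bits recovered
```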


Recognition-based Segmentation of On-Line Hand-printed Words

Neural Information Processing Systems

The input strings consist of a time-ordered sequence of XY coordinates, punctuated by pen-lifts. The methods were designed to work in "run-on mode" where there is no constraint on the spacing between characters. While both methods use a neural network recognition engine and a graph-algorithmic post-processor, their approaches to segmentation are quite different. The first method, which we call INSEC (for input segmentation), uses a combination of heuristics to identify particular pen-lifts as tentative segmentation points. The second method, which we call OUTSEC (for output segmentation), relies on the empirically trained recognition engine for both recognizing characters and identifying relevant segmentation points. 1 INTRODUCTION We address the problem of writer-independent recognition of hand-printed words from an 80,000-word English dictionary. Several levels of difficulty in the recognition of hand-printed words are illustrated in figure 1. The examples were extracted from our databases (table 1). Except in the cases of boxed or clearly spaced characters, segmenting characters independently of the recognition process yields poor recognition performance. This has motivated us to explore recognition-based segmentation techniques.
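
The INSEC idea of flagging certain pen-lifts as tentative cuts might look like the sketch below, which uses a single geometric heuristic: a pen-lift is a candidate cut when the horizontal gap to the next stroke is large relative to a crude character-width estimate. The threshold and width estimate are invented here; the paper combines several heuristics.

```python
import numpy as np

def tentative_cuts(strokes, gap_thresh=0.15):
    """Flag the pen-lift after stroke k as a tentative segmentation
    point when the horizontal gap to stroke k+1 is large relative to
    a crude character-width estimate. `strokes` is a list of (n, 2)
    arrays of XY pen coordinates, one array per pen-down stroke."""
    char_w = max(np.mean([np.ptp(s[:, 0]) for s in strokes]), 1e-6)
    cuts = []
    for k in range(len(strokes) - 1):
        gap = strokes[k + 1][:, 0].min() - strokes[k][:, 0].max()
        if gap / char_w > gap_thresh:
            cuts.append(k)
    return cuts

# two strokes close together, then a wide gap before the third
strokes = [np.array([[0.0, 0], [1.0, 1]]),
           np.array([[1.05, 0], [2.0, 1]]),
           np.array([[3.5, 0], [4.5, 1]])]
print(tentative_cuts(strokes))   # -> [1]
```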


Predicting Complex Behavior in Sparse Asymmetric Networks

Neural Information Processing Systems

Recurrent networks of threshold elements have been studied intensively as associative memories and pattern-recognition devices. Most research, however, has concentrated on fully connected symmetric networks.
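
A minimal simulation shows why asymmetry changes the picture: under parallel sign-threshold updates the state need not settle into a fixed point, so one watches for cycles or long transients instead. The network size, fan-in, and weight distribution below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N, fan_in = 50, 5                          # sparse: each unit receives 5 inputs

# random sparse couplings; W[i, j] and W[j, i] are set independently,
# so the matrix is asymmetric in general
W = np.zeros((N, N))
for i in range(N):
    idx = rng.choice(np.delete(np.arange(N), i), size=fan_in, replace=False)
    W[i, idx] = rng.normal(size=fan_in)

def step(s):
    """Parallel threshold update s <- sign(W s), with 0 mapped to +1."""
    return np.where(W @ s >= 0, 1, -1)

s = rng.choice([-1, 1], size=N)
seen = {}
for t in range(200):
    key = tuple(s)
    if key in seen:                        # revisited a state: cycle found
        print(f"cycle of length {t - seen[key]} entered after {seen[key]} steps")
        break
    seen[key] = t
    s = step(s)
else:
    print("no cycle within 200 steps (long transient)")
```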


Metamorphosis Networks: An Alternative to Constructive Models

Neural Information Processing Systems

Given a set of training examples, determining the appropriate number of free parameters is a challenging problem. Constructive learning algorithms attempt to solve this problem automatically by adding hidden units, and therefore free parameters, during learning. We explore an alternative class of algorithms, called metamorphosis algorithms, in which the number of units is fixed but the number of free parameters gradually increases during learning. The architecture we investigate is composed of RBF units on a lattice, which imposes flexible constraints on the parameters of the network. Virtues of this approach include variable subset selection, robust parameter selection, multiresolution processing, and interpolation of sparse training data.
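
One way to picture a metamorphosis algorithm: keep every RBF unit fixed on its lattice site but tie the output weights into groups, then progressively split the groups so the number of free parameters grows while the unit count stays constant. The doubling schedule and weight-tying scheme below are assumed stand-ins for the paper's mechanism.

```python
import numpy as np

rng = np.random.default_rng(3)

# 32 RBF units fixed on a 1-D lattice over [0, 1]; counts and widths
# are arbitrary choices for this illustration
centers = np.linspace(0, 1, 32)
Phi = lambda x: np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * 0.08 ** 2))

X = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * X) + 0.1 * rng.normal(size=200)
P = Phi(X)

groups = np.zeros(32, dtype=int)          # all units share one weight at first
n_groups = 1
while True:
    # least squares over group weights: units in a group are tied, so the
    # design matrix sums their activations
    G = np.stack([P[:, groups == g].sum(1) for g in range(n_groups)], axis=1)
    w, *_ = np.linalg.lstsq(G, y, rcond=None)
    rmse = np.sqrt(((G @ w - y) ** 2).mean())
    print(f"{n_groups:2d} free parameters, rmse {rmse:.3f}")
    if n_groups == 32:
        break
    n_groups *= 2                          # metamorphosis: split every group
    groups = np.arange(32) // (32 // n_groups)
```

The fit improves at each stage while the network itself never changes size; only the constraints on its parameters relax, which is the contrast with constructive models.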


Learning Sequential Tasks by Incrementally Adding Higher Orders

Neural Information Processing Systems

An incremental, higher-order, non-recurrent network combines two properties found to be useful for learning sequential tasks: higher-order connections and incremental introduction of new units. The network adds higher orders when needed by adding new units that dynamically modify connection weights. Since the new units modify the weights at the next time step with information from the previous step, temporal tasks can be learned without the use of feedback, thereby greatly simplifying training. Furthermore, a theoretically unlimited number of units can be added to reach into the arbitrarily distant past.
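
A sketch of the incremental higher-order idea, using an invented parameterization rather than the paper's: each added unit reads the previous time step's input and contributes a modification to the effective first-order weights, so temporal context enters without recurrent feedback.

```python
import numpy as np

rng = np.random.default_rng(4)

class HigherOrderNet:
    """First-order weights W plus higher-order units whose activity,
    driven by the previous input, perturbs the effective weights; the
    multiplicative form below is an assumption of this sketch."""
    def __init__(self, n_in, n_out):
        self.n_in, self.n_out = n_in, n_out
        self.W = rng.normal(0, 0.1, (n_out, n_in))
        self.units = []                     # incrementally added units

    def add_unit(self):
        # each unit holds a gate vector U (reads x_prev) and a weight
        # modification matrix V, contributing V * tanh(U . x_prev)
        self.units.append((rng.normal(0, 0.1, (self.n_out, self.n_in)),
                           rng.normal(0, 0.1, self.n_in)))

    def forward(self, x, x_prev):
        W_eff = self.W.copy()
        for V, U in self.units:             # previous step gates the weights
            W_eff += V * np.tanh(U @ x_prev)
        return np.tanh(W_eff @ x)

net = HigherOrderNet(4, 2)
net.add_unit()                              # add a higher order when needed
x_prev, x = rng.normal(size=4), rng.normal(size=4)
print(net.forward(x, x_prev))               # output now depends on x_prev too
```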