Teaching Artificial Neural Systems to Drive: Manual Training Techniques for Autonomous Systems

Neural Information Processing Systems

To demonstrate these methods we have trained an ANS network to drive a vehicle through simulated freeway traffic. Introduction Computational systems employing fine-grained parallelism are revolutionizing the way we approach a number of long-standing problems involving pattern recognition and cognitive processing. The field spans a wide variety of computational networks, from constructs emulating neural functions to more crystalline configurations that resemble systolic arrays. Several titles are used to describe this broad area of research; we use the term artificial neural systems (ANS). Our concern in this work is the use of ANS for manually training certain types of autonomous systems where the desired rules of behavior are difficult to formulate. Artificial neural systems consist of a number of processing elements interconnected in a weighted, user-specified fashion, the interconnection weights acting as memory for the system. Each processing element calculates an output value based on the weighted sum of its inputs. In addition, the input data is correlated with the output, or with a desired output specified by an instructive agent, in a training rule that is used to adjust the interconnection weights.
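The training rule itself is not spelled out in the abstract; as a minimal sketch of the kind of element and update it describes, a single linear processing element trained with an error-correlation (delta) rule might look like this (all names and values are illustrative):

    import numpy as np

    def train_element(inputs, targets, lr=0.05, epochs=50):
        """One ANS processing element: output = weighted sum of inputs.
        The delta rule correlates the input with the error between the
        desired output (from an instructive agent) and the actual output."""
        w = np.zeros(inputs.shape[1])
        for _ in range(epochs):
            for x, t in zip(inputs, targets):
                y = np.dot(w, x)           # weighted sum of inputs
                w += lr * (t - y) * x      # error-correlated weight update
        return w

    # Toy usage: learn to reproduce a linear "steering" signal.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    t = X @ np.array([0.5, -1.0, 0.25])    # desired outputs from the "instructor"
    w = train_element(X, t)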


Analysis of Distributed Representation of Constituent Structure in Connectionist Systems

Neural Information Processing Systems

A general method, the tensor product representation, is described for the distributed representation of value/variable bindings. The method allows the fully distributed representation of symbolic structures: the roles in the structures, as well as the fillers for those roles, can be arbitrarily non-local. Fully and partially localized special cases reduce to existing cases of connectionist representations of structured data; the tensor product representation generalizes these and the few existing examples of fully distributed representations of structures. The representation saturates gracefully as larger structures are represented; it permits recursive construction of complex representations from simpler ones; it respects the independence of the capacities to generate and maintain multiple bindings in parallel; it extends naturally to continuous structures and continuous representational patterns; it permits values to also serve as variables; it enables analysis of the interference of symbolic structures stored in associative memories; and it leads to characterization of optimal distributed representations of roles and a recirculation algorithm for learning them. Introduction Any model of complex information processing in networks of simple processors must solve the problem of representing complex structures over network elements. Connectionist models of realistic natural language processing, for example, must employ computationally adequate representations of complex sentences. Many connectionists feel that to develop connectionist systems with the computational power required by complex tasks, distributed representations must be used: an individual processing unit must participate in the representation of multiple items, and each item must be represented as a pattern of activity of multiple processors. Connectionist models have used more or less distributed representations of more or less complex structures, but little if any general analysis of the problem of distributed representation of complex information has been carried out. This paper reports results of an analysis of a general method called the tensor product representation.
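As a concrete illustration of the binding scheme the abstract describes (a minimal sketch, not code from the paper): each role/filler pair is bound by an outer product, bindings are superimposed by addition, and with orthonormal role vectors a filler is recovered exactly by contracting the tensor with its role:

    import numpy as np

    # Tensor product binding: a structure is the sum over role/filler pairs
    # of the outer product filler (x) role. With orthonormal role vectors,
    # a filler is recovered exactly by contracting with its role.
    roles = np.eye(3)                      # r0, r1, r2: orthonormal role vectors
    fillers = {"A": np.array([1.0, 0.0]),
               "B": np.array([0.0, 1.0]),
               "C": np.array([1.0, 1.0])}

    # Represent the sequence (A, B, C): bind each filler to a positional role.
    T = sum(np.outer(f, r) for f, r in zip(fillers.values(), roles))

    # Unbind role 1: contract the tensor with r1 to recover filler "B".
    recovered = T @ roles[1]
    print(recovered)                       # -> [0. 1.]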


Simulations Suggest Information Processing Roles for the Diverse Currents in Hippocampal Neurons

Neural Information Processing Systems

ABSTRACT A computer model of the hippocampal pyramidal cell (HPC) is described which integrates data from a variety of sources in order to develop a consistent description for this cell type. The model presently includes descriptions of eleven nonlinear somatic currents of the HPC, and the electrotonic structure of the neuron is modelled with a soma/short-cable approximation. Model simulations qualitatively or quantitatively reproduce a wide range of somatic electrical behavior in HPCs, and demonstrate possible roles for the various currents in information processing. There are several substrates for neuronal computation, including connectivity, synapses, morphometrics of dendritic trees, linear parameters of cell membrane, as well as nonlinear, time-varying membrane conductances, also referred to as currents or channels. In the classical description of neuronal function, the contribution of membrane channels is constrained to that of generating the action potential, setting firing threshold, and establishing the relationship between (steady-state) stimulus intensity and firing frequency.
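The paper's eleven currents are not reproduced here; the following is only a minimal sketch of the generic current-balance equation such a model integrates, with one leak and one illustrative gated current under forward-Euler integration (all parameter values are invented):

    import numpy as np

    # Generic current-balance equation for a point-soma model:
    #   C dV/dt = -sum_i g_i * m_i(V, t) * (V - E_i) + I_inj
    # Each nonlinear current contributes a conductance gated by state
    # variables with voltage-dependent kinetics; only a leak and one
    # simplified potassium-like current are sketched here.
    C, g_leak, E_leak = 1.0, 0.1, -70.0     # nF, uS, mV (illustrative values)
    g_k, E_k = 2.0, -90.0
    dt, V, n = 0.05, -70.0, 0.0

    def n_inf(V):                            # steady-state activation (illustrative)
        return 1.0 / (1.0 + np.exp(-(V + 40.0) / 10.0))

    for step in range(4000):                 # 200 ms of simulated time
        I_inj = 1.0 if 50 <= step * dt <= 150 else 0.0
        n += dt * (n_inf(V) - n) / 5.0       # first-order gating kinetics, tau = 5 ms
        I_total = g_leak * (V - E_leak) + g_k * n**4 * (V - E_k)
        V += dt * (I_inj - I_total) / C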


A Neural-Network Solution to the Concentrator Assignment Problem

Neural Information Processing Systems

A NEURAL-NETWORK SOLUTION TO THE CONCENTRATOR ASSIGNMENT PROBLEM Gene A. Tagliarini Edward W. Page Department of Computer Science, Clemson University, Clemson, SC 29634-1906 ABSTRACT Networks of simple analog processors having neuron-like properties have been employed to compute good solutions to a variety of optimization problems. This paper presents a neural-net solution to a resource allocation problem that arises in providing local access to the backbone of a wide-area communication network. The problem is described in terms of an energy function that can be mapped onto an analog computational network. Simulation results characterizing the performance of the neural computation are also presented. INTRODUCTION This paper presents a neural-network solution to a resource allocation problem that arises in providing access to the backbone of a communication network [1]. In the field of operations research, this problem was first known as the warehouse location problem, and heuristics for finding feasible, suboptimal solutions have been developed previously [2].
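A hedged sketch of this kind of energy-function mapping (not the paper's exact formulation): analog neurons V[i, j] represent "site i is assigned to concentrator j", and gradient descent on a penalty-plus-cost energy drives the network toward feasible, low-cost assignments:

    import numpy as np

    # E penalizes sites with other than one assignment, concentrators over
    # capacity, and total connection cost; analog dynamics descend on E.
    n_sites, n_conc, cap = 6, 3, 2
    cost = np.random.default_rng(1).uniform(size=(n_sites, n_conc))

    A, B, C_ = 2.0, 2.0, 0.5                 # illustrative penalty weights
    U = np.zeros((n_sites, n_conc))          # neuron internal states

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    for _ in range(2000):
        V = sigmoid(U)
        row_err = V.sum(axis=1, keepdims=True) - 1.0       # one concentrator per site
        col_err = np.maximum(V.sum(axis=0, keepdims=True) - cap, 0.0)  # capacity
        dE = A * row_err + B * col_err + C_ * cost         # dE/dV
        U += -0.1 * dE                                     # descend on the energy

    assignment = sigmoid(U).argmax(axis=1)   # each site's most active concentrator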


Self-Organization of Associative Database and Its Applications

Neural Information Processing Systems

Here, X is a finite or infinite set, and Y is another finite or infinite set. A learning machine observes any set of pairs (x, y) sampled randomly from X × Y (the Cartesian product of X and Y), and it computes some estimate j:
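As a minimal illustration of this setting (assumed, not taken from the paper): observe pairs (x, y) and associate each x with the y most frequently seen alongside it, a crude associative database:

    from collections import Counter, defaultdict

    # Observe pairs (x, y) sampled from X x Y and form an estimate that
    # maps each x to the y most frequently seen with it.
    samples = [("red", "stop"), ("green", "go"), ("red", "stop"), ("red", "slow")]

    table = defaultdict(Counter)
    for x, y in samples:
        table[x][y] += 1

    estimate = {x: ys.most_common(1)[0][0] for x, ys in table.items()}
    print(estimate)    # {'red': 'stop', 'green': 'go'}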


LEARNING BY STATE RECURRENCE DETECTION

Neural Information Processing Systems

LEARNING BY STATE RECURRENCE DETECTION Bruce E. Rosen, James M. Goodwin, and Jacques J. Vidal University of California, Los Angeles, Ca. 90024 ABSTRACT This research investigates a new technique for unsupervised learning of nonlinear control problems. The approach is applied both to Michie and Chambers' BOXES algorithm and to Barto, Sutton and Anderson's extension, the ASE/ACE system, and has significantly improved the convergence rate of stochastically based learning automata. Recurrence learning is a new nonlinear reward-penalty algorithm. It exploits information found during learning trials to reinforce decisions resulting in the recurrence of nonfailing states. Recurrence learning applies positive reinforcement during the exploration of the search space, whereas in the BOXES or ASE algorithms, only negative weight reinforcement is applied, and then only on failure. Simulation results show that the added information from recurrence learning increases the learning rate. Our empirical results show that recurrence learning is faster than both basic failure-driven learning and failure prediction methods. Although recurrence learning has only been tested in failure-driven experiments, there are goal-directed learning applications where detection of recurring oscillations may provide useful information that reduces the learning time by applying negative, instead of positive, reinforcement.
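A hedged sketch of the recurrence idea as stated in the abstract (the algorithm's actual details are not given here): keep a history of visited states, and when a nonfailing state recurs, positively reinforce the decisions made along the loop, rather than punishing only on failure as in BOXES/ASE:

    import random

    weights = {}                              # (state, action) -> preference
    history = []                              # states and actions this trial

    def choose(state, actions):
        """Stochastic action selection weighted by learned preferences."""
        prefs = [weights.get((state, a), 0.0) for a in actions]
        total = sum(2.0 ** p for p in prefs)
        r = random.uniform(0, total)
        for a, p in zip(actions, prefs):
            r -= 2.0 ** p
            if r <= 0:
                return a
        return actions[-1]

    def step(state, action):
        """On recurrence of a nonfailing state, reinforce the loop's decisions."""
        for i, (s, _) in enumerate(history):
            if s == state:
                for s2, a2 in history[i:]:
                    weights[(s2, a2)] = weights.get((s2, a2), 0.0) + 0.1
                break
        history.append((state, action))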


Stability Results for Neural Networks

Neural Information Processing Systems

Department of Electrical and Computer Engineering, University of Notre Dame, Notre Dame, IN 46556 ABSTRACT In the present paper we survey and utilize results from the qualitative theory of large scale interconnected dynamical systems in order to develop a qualitative theory for the Hopfield model of neural networks. In our approach we view such networks as an interconnection of many single neurons. Our results are phrased in terms of the qualitative properties of the individual neurons and in terms of the properties of the interconnecting structure of the neural networks. Aspects of neural networks which we address include asymptotic stability, exponential stability, and instability of an equilibrium; estimates of trajectory bounds; estimates of the domain of attraction of an asymptotically stable equilibrium; and stability of neural networks under structural perturbations. INTRODUCTION In recent years, neural networks have attracted considerable attention as candidates for novel computational systems [1-3].
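For reference, one standard continuous-time formulation of the Hopfield model being analyzed, with its energy function (a textbook statement, not quoted from the paper):

    \dot{u}_i = -\frac{u_i}{\tau_i} + \sum_{j=1}^{N} T_{ij}\, g_j(u_j) + I_i, \qquad v_i = g_i(u_i)

    E(v) = -\frac{1}{2} \sum_{i,j} T_{ij} v_i v_j + \sum_i \frac{1}{\tau_i} \int_0^{v_i} g_i^{-1}(s)\, ds - \sum_i I_i v_i

For symmetric interconnections T_{ij} = T_{ji} and monotone increasing activations g_i, one has \dot{E} \le 0 along trajectories, which is the usual starting point for asymptotic-stability results of the kind surveyed.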


The Sigmoid Nonlinearity in Prepyriform Cortex

Neural Information Processing Systems

THE SIGMOID NONLINEARITY IN PREPYRIFORM CORTEX Frank H. Eeckman University of California, Berkeley, CA 94720 ABSTRACT We report a study on the relationship between EEG amplitude values and unit spike output in the prepyriform cortex of awake and motivated rats. This relationship takes the form of a sigmoid curve that describes normalized pulse output for normalized wave input. The curve is fitted using nonlinear regression and is described by its slope and maximum value. Measurements were made for both excitatory and inhibitory neurons in the cortex. These neurons are known to form a monosynaptic negative feedback loop. Both classes of cells can be described by the same parameters.
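A minimal sketch of the fitting step described (assumed sigmoid form and invented data; the paper's own parameterization may differ), using nonlinear least squares:

    import numpy as np
    from scipy.optimize import curve_fit

    # Sigmoid for normalized pulse output as a function of normalized
    # wave (EEG) input, summarized by its slope and maximum value.
    def sigmoid(v, q_max, slope, v0):
        return q_max / (1.0 + np.exp(-slope * (v - v0)))

    v = np.linspace(-3, 3, 50)                        # normalized wave amplitude
    rng = np.random.default_rng(2)
    q = sigmoid(v, 1.0, 1.5, 0.0) + 0.02 * rng.normal(size=v.size)

    (q_max, slope, v0), _ = curve_fit(sigmoid, v, q, p0=[1.0, 1.0, 0.0])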


Network Generality, Training Required, and Precision Required

Neural Information Processing Systems

We show how to estimate (1) the number of functions that can be implemented by a particular network architecture, (2) the amount of analog precision needed in the network's connections, and (3) the number of training examples the network must see before it can be expected to form reliable generalizations.
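A back-of-the-envelope version of the kind of counting involved (an illustration, not the paper's derivation): with W connections at b bits of precision each there are at most 2^(W·b) weight settings, hence at most that many implementable functions, and selecting one of N candidate functions requires on the order of log2(N) bits of information from training examples:

    from math import log2

    W, b = 30, 4                          # connections and bits per connection
    max_functions = 2 ** (W * b)          # upper bound on implementable functions
    bits_needed = log2(max_functions)     # = W * b bits to pin down one function
    examples_needed = bits_needed / 1.0   # assuming ~1 bit learned per example
    print(max_functions, bits_needed, examples_needed)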


Learning on a General Network

Neural Information Processing Systems

The network model considered consists of interconnected groups of neurons, where each group may be fully interconnected (it may have feedback connections, possibly with asymmetric weights), but no loops between the groups are allowed. A stochastic descent algorithm is applied under an inequality constraint on each intragroup weight matrix, which ensures that the network possesses a unique equilibrium state for every input. Introduction It has been shown in the last few years that large networks of interconnected "neuron"-like elements…
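A hedged sketch of why an inequality constraint on an intragroup weight matrix can guarantee a unique equilibrium (the paper's exact condition is not reproduced; here a spectral-norm bound makes the update a contraction):

    import numpy as np

    # If the weight matrix W satisfies ||W|| * L < 1, where L is the
    # Lipschitz constant of the activation (tanh has L = 1), the update
    # v <- tanh(W v + x) is a contraction, so the group settles to a
    # unique equilibrium for any fixed input x.
    rng = np.random.default_rng(3)
    W = rng.normal(size=(5, 5))
    W *= 0.9 / np.linalg.norm(W, 2)       # rescale so the spectral norm is 0.9

    def equilibrium(x_in, iters=200):
        v = np.zeros(5)
        for _ in range(iters):
            v = np.tanh(W @ v + x_in)     # fixed-point iteration converges
        return v

    print(equilibrium(rng.normal(size=5)))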