Model of a Biological Neuron as a Temporal Neural Network
Murphy, Sean D., Kairiss, Edward W.
A biological neuron can be viewed as a device that maps a multidimensional temporal event signal (dendritic postsynaptic activations) into a unidimensional temporal event signal (action potentials). We have designed a network, the Spatio-Temporal Event Mapping (STEM) architecture, which can learn to perform this mapping for arbitrary biophysical models of neurons. Such an appropriately trained network, called a STEM cell, can be used in place of a conventional compartmental model in simulations where only the transfer function is important, such as network simulations. The STEM cell offers advantages over compartmental models in terms of computational efficiency, analytical tractability, and as a framework for VLSI implementations of biological neurons.
The Electrotonic Transformation: a Tool for Relating Neuronal Form to Function
Carnevale, Nicholas T., Tsai, Kenneth Y., Claiborne, Brenda J., Brown, Thomas H.
The spatial distribution and time course of electrical signals in neurons have important theoretical and practical consequences. Because it is difficult to infer how neuronal form affects electrical signaling, we have developed a quantitative yet intuitive approach to the analysis of electrotonus. This approach transforms the architecture of the cell from anatomical to electrotonic space, using the logarithm of voltage attenuation as the distance metric. We describe the theory behind this approach and illustrate its use.

1 INTRODUCTION

The fields of computational neuroscience and artificial neural nets have enjoyed a mutually beneficial exchange of ideas. This has been most evident at the network level, where concepts such as massive parallelism, lateral inhibition, and recurrent excitation have inspired both the analysis of brain circuits and the design of artificial neural net architectures.
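The key device in this abstract is using the logarithm of voltage attenuation as a distance metric. A minimal sketch of why that choice works, with made-up segment attenuations rather than data from the paper: attenuations multiply along a path, so their logarithms add the way distances should.

```python
import math

# Minimal sketch of the log-attenuation distance metric described above.
# The segment attenuations below are illustrative numbers, not measurements.
# Voltage attenuation A = V_source / V_target >= 1 along each segment.

segments = [1.8, 2.5, 1.2]   # attenuation of successive neurite segments

# Attenuations multiply along a path, so their logarithms add, which is
# what makes L = ln(A) behave like a distance in "electrotonic space":
A_total = math.prod(segments)
L_total = math.log(A_total)

assert abs(L_total - sum(math.log(a) for a in segments)) < 1e-12
print(f"total attenuation {A_total:.2f} -> electrotonic distance {L_total:.2f}")
```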
Bayesian Query Construction for Neural Network Models
Paass, Gerhard, Kindermann, Jörg
If data collection is costly, there is much to be gained by actively selecting particularly informative data points in a sequential way. In a Bayesian decision-theoretic framework we develop a query selection criterion which explicitly takes into account the intended use of the model predictions. By Markov Chain Monte Carlo methods the necessary quantities can be approximated to a desired precision. As the number of data points grows, the model complexity is modified by a Bayesian model selection strategy. The properties of two versions of the criterion are demonstrated in numerical experiments.
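The paper's decision-theoretic criterion is not reproduced in the abstract; as a hedged stand-in for the sampling-based machinery it describes, the sketch below scores candidate queries by predictive variance across posterior samples and queries where the model is most uncertain. The toy model, posterior, and candidate pool are all assumptions.

```python
import numpy as np

# Illustrative stand-in for sampling-based query scoring (not the paper's
# exact criterion): rate each candidate input by the spread of predictions
# across posterior parameter samples, then query the most uncertain point.

rng = np.random.default_rng(1)

def predict(theta, x):
    # toy one-parameter "model" standing in for a sampled network
    return np.sin(theta * x)

posterior_samples = rng.normal(1.0, 0.3, size=200)   # e.g. drawn by MCMC
candidates = np.linspace(0.0, 3.0, 61)               # pool of possible queries

preds = np.array([[predict(th, x) for x in candidates]
                  for th in posterior_samples])
scores = preds.var(axis=0)                           # predictive variance per candidate

best = candidates[np.argmax(scores)]
print(f"query the oracle at x = {best:.2f}")
```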
Adaptive Elastic Input Field for Recognition Improvement
Asogawa, Minoru
For machines to perform classification tasks, such as speech and character recognition, appropriately handling deformed patterns is a key to achieving high performance. The author presents a new type of classification system, an Adaptive Input Field Neural Network (AIFNN), which includes a simple pre-trained neural network and an elastic input field attached to an input layer. By using an iterative method, AIFNN can determine an optimal affine translation for an elastic input field to compensate for the original deformations. The convergence of the AIFNN algorithm is shown. AIFNN is applied to handwritten numeral recognition. Consequently, 10.83% of originally misclassified patterns are correctly categorized and total performance is improved, without modifying the neural network.

1 Introduction

For machines to accomplish classification tasks, such as speech and character recognition, appropriately handling deformed patterns is a key to achieving high performance [Simard 92] [Simard 93] [Hinton 92] [Barnard 91]. The number of reasonable deformations of patterns is enormous, since they can be linear translations (an affine translation or a time shifting), nonlinear deformations (a set of combinations of partial translations), or both. Although a simple neural network (e.g. a 3-layered neural network) is able to adapt
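As a hedged sketch of the compensation idea described above (the network, image size, and loss below are assumptions, not the paper's specification), one can hold a pre-trained classifier fixed and iteratively fit affine parameters of an input warp so that the deformed pattern is pulled back toward something the network already classifies confidently:

```python
import torch
import torch.nn.functional as F

# Hedged sketch of the elastic-input-field idea: keep a pre-trained
# classifier fixed and iteratively fit an affine warp of the input.
# All names, sizes, and the confidence loss are illustrative assumptions.

net = torch.nn.Sequential(torch.nn.Flatten(),
                          torch.nn.Linear(28 * 28, 10))  # stand-in for the pre-trained net
for p in net.parameters():
    p.requires_grad_(False)                              # network is never modified

image = torch.rand(1, 1, 28, 28)                         # deformed input pattern
theta = torch.tensor([[[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]]], requires_grad=True)  # affine parameters

opt = torch.optim.SGD([theta], lr=0.05)
for _ in range(100):
    grid = F.affine_grid(theta, image.shape, align_corners=False)
    warped = F.grid_sample(image, grid, align_corners=False)
    logits = net(warped)
    loss = -logits.log_softmax(1).max()   # raise confidence in the best class
    opt.zero_grad(); loss.backward(); opt.step()
```

Only the warp parameters are optimized, which mirrors the abstract's claim that recognition improves "without modifying the neural network".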
A Non-linear Information Maximisation Algorithm that Performs Blind Separation
Bell, Anthony J., Sejnowski, Terrence J.
With the exception of (Becker 1992), there has been little attempt to use non-linearity in networks to achieve something a linear network could not. Nonlinear networks, however, are capable of computing more general statistics than the second-order ones involved in decorrelation, and as a consequence they are capable of dealing with signals (and noises) which have detailed higher-order structure. The success of the 'H-J' networks at blind separation (Jutten & Herault 1991) suggests that it should be possible to separate statistically independent components by using learning rules which make use of moments of all orders. This paper takes a principled approach to this problem, by starting with the question of how to maximise the information passed on in a nonlinear feed-forward network. Starting with an analysis of a single unit, the approach is extended to a network mapping N inputs to N outputs. In the process, it will be shown that, under certain fairly weak conditions, the N → N network forms a minimally redundant encoding of the inputs, and that it therefore performs Independent Component Analysis (ICA).

2 Information maximisation

The information that the output Y contains about the input X is defined as:

I(Y, X) = H(Y) - H(Y|X)    (1)

where H(Y) is the entropy (information) in the output, while H(Y|X) is whatever information the output has which didn't come from the input. In the case that we have no noise (or rather, we don't know what is noise and what is signal in the input), the mapping between X and Y is deterministic and H(Y|X) has its lowest possible value of
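For concreteness, gradient ascent on H(Y) for a logistic N → N network yields the update ΔW ∝ (Wᵀ)⁻¹ + (1 − 2y)xᵀ. A minimal sketch of that rule applied to blind separation follows; the toy sources, mixing matrix, and step size are assumptions made for illustration.

```python
import numpy as np

# Minimal sketch of the infomax rule for blind separation.
# Assumptions beyond the text: logistic nonlinearity g(u) = 1/(1+e^-u),
# two super-Gaussian sources, and a small constant step size (a real run
# may need tuning or input whitening).

rng = np.random.default_rng(0)
n, T = 2, 10000
S = rng.laplace(size=(n, T))          # super-Gaussian independent sources
A = rng.normal(size=(n, n))           # unknown mixing matrix
X = A @ S                             # observed mixtures

W = np.eye(n)                         # unmixing matrix to be learned
lr = 0.01
for t in range(T):
    x = X[:, t:t+1]                   # one mixture sample (n x 1)
    y = 1.0 / (1.0 + np.exp(-W @ x))  # sigmoidal outputs
    # Gradient ascent on output entropy H(y):
    W += lr * (np.linalg.inv(W.T) + (1.0 - 2.0 * y) @ x.T)

# Rows of W @ A should approximate a scaled permutation if separation worked.
print(np.round(W @ A, 2))
```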
An Integrated Architecture of Adaptive Neural Network Control for Dynamic Systems
Liu, Ke, Tokar, Robert L., McVey, Brian D.
Most neural network control architectures originate from work presented by Narendra [1], Psaltis [2], and Lightbody [3]. In these architectures, an identification neural network is trained to function as a model for the plant. Based on the neural network identification model, a neural network controller is trained by backpropagating the error through the identification network. After training, the identification network is replaced by the real plant. As is illustrated in Figure 1, the controller receives external inputs as well as plant state feedback inputs. Training procedures are employed such that the networks approximate feedforward control surfaces that are functions of the external inputs and the state feedback of the plant (or of the identification network during training).
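A hedged sketch of this training arrangement follows; the architecture, dimensions, and loss are assumed for illustration. The identification network's parameters are frozen, but the tracking error still backpropagates through it into the controller.

```python
import torch
import torch.nn as nn

# Sketch (assumed details, not the paper's specification): train a controller
# by backpropagating tracking error through a frozen identification network.

plant_model = nn.Sequential(nn.Linear(3, 16), nn.Tanh(), nn.Linear(16, 2))  # pre-trained plant model
controller  = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 1))

for p in plant_model.parameters():    # identification network stays fixed
    p.requires_grad_(False)

opt = torch.optim.SGD(controller.parameters(), lr=1e-2)
for _ in range(1000):
    ref   = torch.randn(32, 2)                    # external reference inputs
    state = torch.randn(32, 2)                    # plant state feedback
    u = controller(torch.cat([ref, state], 1))    # control action
    next_state = plant_model(torch.cat([state, u], 1))
    loss = ((next_state - ref) ** 2).mean()       # error flows through plant_model into controller
    opt.zero_grad(); loss.backward(); opt.step()
```

After training, the real plant takes the identification network's place, as the abstract describes.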
A Model of the Neural Basis of the Rat's Sense of Direction
Skaggs, William E., Knierim, James J., Kudrimoti, Hemant S., McNaughton, Bruce L.
In the last decade the outlines of the neural structures subserving the sense of direction have begun to emerge. Several investigations have shed light on the effects of vestibular input and visual input on the head direction representation. In this paper, a model is formulated of the neural mechanisms underlying the head direction system. The model is built out of simple ingredients, depending on nothing more complicated than connectional specificity, attractor dynamics, Hebbian learning, and sigmoidal nonlinearities, but it behaves in a sophisticated way and is consistent with most of the observed properties of real head direction cells. In addition it makes a number of predictions that ought to be testable by reasonably straightforward experiments.
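As an illustrative sketch of those ingredients (the parameters below are assumptions, not the paper's values), a ring of sigmoidal units with local excitation and broad inhibition sustains an activity bump after a transient directional cue, which is the attractor behaviour a head-direction representation needs:

```python
import numpy as np

# Illustrative ring-attractor sketch of the ingredients the abstract lists
# (connectional specificity, attractor dynamics, sigmoidal nonlinearities).
# All parameters are chosen for demonstration only.

N = 100
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)          # preferred directions
d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))  # circular distance
W = 1.5 * np.exp(-d**2 / (2 * 0.3**2)) - 0.5                  # local excitation, global inhibition

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Transient cue at 90 degrees sets the initial activity profile:
r = np.exp(-(np.angle(np.exp(1j * (theta - np.pi / 2))))**2 / (2 * 0.3**2))
for _ in range(300):                        # cue removed; the bump should persist
    r += 0.1 * (-r + sigmoid(W @ r - 2.0))

print("sustained bump centred near", round(np.degrees(theta[np.argmax(r)])), "deg")
```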