Reading a Neural Code
Bialek, William, Rieke, Fred, Steveninck, Robert R. de Ruyter van, Warland, David
Traditional methods of studying neural coding characterize the encoding of known stimuli in average neural responses. Organisms face nearly the opposite task - decoding short segments of a spike train to extract information about an unknown, time-varying stimulus. Here we present strategies for characterizing the neural code from the point of view of the organism, culminating in algorithms for real-time stimulus reconstruction based on a single sample of the spike train. These methods are applied to the design and analysis of experiments on an identified movement-sensitive neuron in the fly visual system. As far as we know this is the first instance in which a direct "reading" of the neural code has been accomplished.
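A minimal sketch of the linear decoding idea described above, assuming the simplest reconstruction scheme (a fixed kernel superposed at each spike time); the kernel shape, time step, and spike times below are illustrative, not taken from the paper:

```python
import numpy as np

def reconstruct_stimulus(spike_times, kernel, dt, duration):
    """Linear decoding: estimate the time-varying stimulus as a
    superposition of one copy of the decoding kernel per spike."""
    n_bins = int(round(duration / dt))
    k = len(kernel)
    padded = np.zeros(n_bins + k)              # padding absorbs edge effects
    for t in spike_times:
        start = int(round(t / dt))
        padded[start:start + k] += kernel      # place kernel at the spike
    return padded[k // 2 : k // 2 + n_bins]    # shift to center the kernel

# Toy usage with an illustrative Gaussian kernel and a few spike times.
dt, duration = 0.001, 1.0
tau = np.arange(-0.05, 0.05, dt)
kernel = np.exp(-tau**2 / (2 * 0.01**2))
spikes = [0.12, 0.15, 0.40, 0.41, 0.80]
s_hat = reconstruct_stimulus(spikes, kernel, dt, duration)
```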
Using a Translation-Invariant Neural Network to Diagnose Heart Arrhythmia
Distinctive electrocardiogram (ECG) patterns are created when the heart is beating normally and when a dangerous arrhythmia is present. Some devices which monitor the ECG and react to arrhythmias parameterize the ECG signal and make a diagnosis based on the parameters. The author discusses the use of a neural network to classify the ECG signals directly.
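The abstract does not spell out the network architecture; the sketch below shows one standard way to get translation invariance, a shared-weight (convolutional) filter scanned along the trace with a max over positions. The filter bank and threshold are assumptions for illustration:

```python
import numpy as np

def conv1d(signal, weights):
    """Correlate one shared-weight filter with the whole trace; weight
    sharing is what makes the detector translation-invariant."""
    n, k = len(signal), len(weights)
    return np.array([signal[i:i + k] @ weights for i in range(n - k + 1)])

def classify_ecg(signal, filters, threshold=0.5):
    """Flag an arrhythmia if any filter responds strongly anywhere in
    the window, regardless of where the waveform occurs."""
    scores = [np.tanh(conv1d(signal, w)).max() for w in filters]
    return max(scores) > threshold

# Toy usage: random filters standing in for trained ones.
rng = np.random.default_rng(0)
ecg = rng.normal(size=500)                  # stand-in for a sampled ECG
filters = [rng.normal(size=20) for _ in range(4)]
print(classify_ecg(ecg, filters))
```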
Can Simple Cells Learn Curves? A Hebbian Model in a Structured Environment
Softky, William R., Kammen, Daniel M.
In the mammalian visual cortex, orientation-selective 'simple cells' which detect straight lines may be adapted to detect curved lines instead. We test a biologically plausible, Hebbian, single-neuron model, which learns oriented receptive fields upon exposure to unstructured (noise) input and maintains orientation selectivity upon exposure to edges or bars of all orientations and positions. This model can also learn arc-shaped receptive fields upon exposure to an environment of only circular rings. Thus, new experiments which try to induce an abnormal (curved) receptive field may provide insight into the plasticity of simple cells. The model suggests that exposing cells to only a single spatial frequency may induce more striking spatial-frequency- and orientation-dependent effects than heretofore observed.
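A minimal single-neuron sketch of the Hebbian learning the abstract refers to, using Oja's normalized Hebbian rule as a stand-in for the paper's exact update; patch size, learning rate, and the noise environment are illustrative:

```python
import numpy as np

def hebbian_receptive_field(patches, lr=0.01, epochs=10, seed=0):
    """Train one linear neuron with Oja's rule; the weights converge
    toward the dominant correlation in the input environment (oriented
    for bars/noise, arc-shaped for an environment of rings)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=patches.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in patches:
            y = w @ x                      # neuron output
            w += lr * y * (x - y * w)      # Oja: Hebb term minus decay
    return w

# Illustrative environment: unstructured noise patches (8x8, flattened).
patches = np.random.default_rng(1).normal(size=(500, 64))
rf = hebbian_receptive_field(patches)
```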
Dynamic Behavior of Constrained Back-Propagation Networks
It is generally acknowledged that the generalization performance of back-propagation networks (Rumelhart, Hinton & Williams, 1986) depends on the relative size of the training data and of the trained network. By analogy to curve-fitting, and on theoretical grounds, the generalization performance of the network should decrease as the size of the network and the associated number of degrees of freedom increase (Rumelhart, 1987; Denker et al., 1987; Hanson & Pratt, 1989). This paper examines the dynamics of the standard back-propagation algorithm (BP) and of a constrained back-propagation variation (CBP), designed to adapt the size of the network to the training data. The performance, learning dynamics, and resulting representations of the two algorithms are compared.
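The abstract does not define CBP's constraint term; below is a sketch of the general recipe, assuming the constraint is a simple magnitude penalty added to the standard back-propagation gradient so that superfluous degrees of freedom are driven toward zero:

```python
import numpy as np

def cbp_step(w, x, t, lr=0.1, decay=1e-3):
    """One update for a single sigmoid unit: the usual back-propagated
    error gradient plus a penalty gradient that shrinks unused weights,
    letting the effective network size adapt to the training data."""
    y = 1.0 / (1.0 + np.exp(-(w @ x)))        # forward pass
    grad_err = (y - t) * y * (1.0 - y) * x    # BP gradient of squared error
    grad_pen = decay * w                      # constraint: weight decay
    return w - lr * (grad_err + grad_pen)
```

With decay=0 this reduces to standard BP, which is the comparison the paper draws.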
Predicting Weather Using a Genetic Memory: A Combination of Kanerva's Sparse Distributed Memory with Holland's Genetic Algorithms
Kanerva's sparse distributed memory (SDM) is an associative-memory model based on the mathematical properties of high-dimensional binary address spaces. Holland's genetic algorithms are a search technique for high-dimensional spaces inspired by evolutionary processes of DNA. "Genetic Memory" is a hybrid of the above two systems, in which the memory uses a genetic algorithm to dynamically reconfigure its physical storage locations to reflect correlations between the stored addresses and data. For example, when presented with raw weather station data, the Genetic Memory discovers specific features in the weather data which correlate well with upcoming rain, and reconfigures the memory to utilize this information effectively. This architecture is designed to maximize the ability of the system to scale up to handle real-world problems.
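A toy sketch of the hybrid: SDM hard locations fire within a Hamming radius of a probe address, and a genetic step replaces the least useful locations with crossovers of the most useful ones. The fitness signal and parameters are assumptions, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def activated(addresses, probe, radius):
    """SDM addressing: a hard location participates in a read/write if
    its address is within Hamming distance `radius` of the probe."""
    return (addresses != probe).sum(axis=1) <= radius

def ga_reconfigure(addresses, fitness, n_replace=4):
    """Genetic step: overwrite the least useful locations with one-point
    crossovers of the most useful ones, so physical storage migrates
    toward regions correlated with the stored data."""
    order = np.argsort(fitness)
    worst, best = order[:n_replace], order[-n_replace:]
    for i in worst:
        p1, p2 = rng.choice(best, size=2, replace=False)
        cut = rng.integers(1, addresses.shape[1])
        addresses[i, :cut] = addresses[p1, :cut]
        addresses[i, cut:] = addresses[p2, cut:]
    return addresses

# Toy usage: 64 hard locations with 32-bit addresses.
addresses = rng.integers(0, 2, size=(64, 32))
fitness = rng.random(64)            # stand-in for a usefulness measure
addresses = ga_reconfigure(addresses, fitness)
```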
A Reconfigurable Analog VLSI Neural Network Chip
Satyanarayana, Srinagesh, Tsividis, Yannis P., Graf, Hans Peter
The distributed-neuron synapses are arranged in blocks of 16, which we call '4 x 4 tiles'. Switch matrices are interleaved between these tiles to provide programmable interconnections. With a small area overhead (15%), the 1024 units of the network can be rearranged in various configurations, for example a 12-32-12 network, a 16-12-12-16 network, or two 12-32 networks (the numbers separated by dashes indicate the number of units per layer, including the input layer). Weights are stored in analog form on MOS capacitors.
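As a back-of-envelope illustration of the tile arithmetic, assuming each layer-to-layer connection consumes one synapse per weight, rounded up to whole 4 x 4 tiles (the abstract does not detail the actual switch-matrix allocation):

```python
def fits_on_chip(layers, total_units=1024, tile=16):
    """Check whether a layered configuration (e.g. 12-32-12) fits within
    the chip's synapse budget, counting whole 4 x 4 tiles per layer pair."""
    needed = 0
    for n_in, n_out in zip(layers, layers[1:]):
        tiles = -(-(n_in * n_out) // tile)   # ceiling division
        needed += tiles * tile
    return needed <= total_units

print(fits_on_chip([12, 32, 12]))       # 384 + 384 = 768 synapses -> True
print(fits_on_chip([16, 12, 12, 16]))   # 192 + 144 + 192 = 528   -> True
```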
The Effect of Catecholamines on Performance: From Unit to System Behavior
Servan-Schreiber, David, Printz, Harry, Cohen, Jonathan D.
We present a model of catecholamine effects in a network of neural-like elements. We argue that changes in the responsivity of individual elements do not affect their ability to detect a signal and ignore noise. However, the same changes in cell responsivity in a network of such elements do improve the signal detection performance of the network as a whole. We show how this result can be used in a computer simulation of behavior to account for the effect of CNS stimulants on the signal detection performance of human subjects.
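A toy simulation in the spirit of the model, with a gain increase standing in for catecholamine release; the pooling rule, bias, and signal strength are assumptions, and the printout lets one compare a single element against an ensemble at low and high gain:

```python
import numpy as np

rng = np.random.default_rng(0)

def responses(gain, n_elements, trials=5000, signal=0.5):
    """Logistic elements driven by signal+noise vs. noise alone; the
    catecholamine effect is modeled as an increase in gain."""
    noise = rng.normal(size=(2, trials, n_elements))
    drive = noise + np.array([signal, 0.0])[:, None, None]
    out = 1.0 / (1.0 + np.exp(-gain * (drive - 1.0)))  # fixed bias of -1
    return out.mean(axis=2)                 # pool across the ensemble

def dprime(gain, n_elements):
    sig, noi = responses(gain, n_elements)
    return (sig.mean() - noi.mean()) / np.sqrt(0.5 * (sig.var() + noi.var()))

for gain in (1.0, 3.0):                     # baseline vs. "drug" gain
    print(gain, dprime(gain, 1), dprime(gain, 10))
```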
Adjoint Operator Algorithms for Faster Learning in Dynamical Neural Networks
Barhen, Jacob, Toomarian, Nikzad Benny, Gulati, Sandeep
A methodology for faster supervised learning in dynamical nonlinear neural networks is presented. It exploits the concept of adjoint operators to enable computation of changes in the network's response due to perturbations in all system parameters, using the solution of a single set of appropriately constructed linear equations. The lower bound on speedup per learning iteration over conventional methods for calculating the neuromorphic energy gradient is O(N^2), where N is the number of neurons in the network.

1 INTRODUCTION

The biggest promise of artificial neural networks as computational tools lies in the hope that they will enable fast processing and synthesis of complex information patterns. In particular, considerable efforts have recently been devoted to the formulation of efficient methodologies for learning (e.g., Rumelhart et al., 1986; Pineda, 1988; Pearlmutter, 1989; Williams and Zipser, 1989; Barhen, Gulati and Zak, 1989). The development of learning algorithms is generally based upon the minimization of a neuromorphic energy function. The fundamental requirement of such an approach is the computation of the gradient of this objective function with respect to the various parameters of the neural architecture, e.g., synaptic weights, neural ...
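A minimal sketch of the adjoint idea for a fixed-point network x = tanh(Wx + b) with energy E = 0.5 * |x - target|^2: one linear solve yields the sensitivity of E to every weight at once. The dynamics, energy, and sizes here are illustrative, not the paper's formulation:

```python
import numpy as np

def fixed_point(W, b, iters=200):
    """Relax the network x = tanh(W x + b) to its steady state."""
    x = np.zeros(len(b))
    for _ in range(iters):
        x = np.tanh(W @ x + b)
    return x

def adjoint_gradient(W, b, target):
    """Gradient of E = 0.5*|x - target|^2 w.r.t. ALL weights from one
    linear solve (the adjoint system), not one solve per weight."""
    x = fixed_point(W, b)
    D = 1.0 - x**2                          # tanh'(u) at the fixed point
    A = np.eye(len(b)) - D[:, None] * W     # (I - diag(D) W)
    lam = np.linalg.solve(A.T, x - target)  # adjoint variables
    return np.outer(D * lam, x)             # dE/dW, an N x N matrix

# Sanity check against a finite difference on one weight.
rng = np.random.default_rng(0)
N = 5
W = 0.2 * rng.normal(size=(N, N)); b = rng.normal(size=N)
target = rng.normal(size=N)
g = adjoint_gradient(W, b, target)
eps = 1e-6
W2 = W.copy(); W2[1, 2] += eps
x0, x1 = fixed_point(W, b), fixed_point(W2, b)
fd = (0.5 * np.sum((x1 - target)**2) - 0.5 * np.sum((x0 - target)**2)) / eps
print(np.allclose(g[1, 2], fd, rtol=1e-3))
```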