
Simulations Suggest Information Processing Roles for the Diverse Currents in Hippocampal Neurons

Neural Information Processing Systems

ABSTRACT A computer model of the hippocampal pyramidal cell (HPC) is described which integrates data from a variety of sources in order to develop a consistent description for this cell type. The model presently includes descriptions of eleven nonlinear somatic currents of the HPC, and the electrotonic structure of the neuron is modelled with a soma/short-cable approximation. Model simulations qualitatively or quantitatively reproduce a wide range of somatic electrical behavior in HPCs, and demonstrate possible roles for the various currents in information processing.

There are several substrates for neuronal computation, including connectivity, synapses, morphometrics of dendritic trees, linear parameters of cell membrane, as well as nonlinear, time-varying membrane conductances, also referred to as currents or channels. In the classical description of neuronal function, the contribution of membrane channels is constrained to that of generating the action potential, setting firing threshold, and establishing the relationship between (steady-state) stimulus intensity and firing frequency.
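
The membrane equation underlying such a model sums many conductance terms. As a hedged sketch (not the paper's model or parameters, and with only one voltage-gated current in place of its eleven), a minimal single-compartment simulation shows the form each term takes; every value below is an illustrative placeholder.

    import numpy as np

    # Minimal single-compartment membrane model, integrated by forward Euler.
    # All parameters are illustrative placeholders, not the paper's values.
    C = 1.0                        # membrane capacitance (uF/cm^2)
    g_leak, E_leak = 0.1, -70.0    # leak conductance (mS/cm^2), reversal (mV)
    g_max, E_rev = 5.0, 50.0       # one generic voltage-gated current

    def m_inf(V):
        # Steady-state activation: a sigmoid of membrane voltage.
        return 1.0 / (1.0 + np.exp(-(V + 40.0) / 5.0))

    tau_m = 2.0                    # activation time constant (ms)
    dt, T = 0.01, 50.0             # time step and duration (ms)
    V, m = -70.0, 0.0              # initial voltage and gating state
    I_inj = 3.0                    # constant injected current (uA/cm^2)

    trace = []
    for _ in range(int(T / dt)):
        I_ion = g_leak * (V - E_leak) + g_max * m * (V - E_rev)
        m += dt * (m_inf(V) - m) / tau_m     # first-order gating kinetics
        V += dt * (I_inj - I_ion) / C        # membrane equation
        trace.append(V)

Each additional current in a model like this contributes one more g*m^p*h^q*(V - E) term to I_ion, which is how distinct currents come to shape distinct aspects of the somatic response.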


A Novel Net that Learns Sequential Decision Process

Neural Information Processing Systems

We propose a new scheme to construct neural networks to classify patterns. The new scheme has several novel features: 1. We focus attention on the important attributes of patterns in ranking order.
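
As a hedged sketch of what classifying by ranked attributes could look like (the paper's own network construction is not reproduced here; the ranking criterion, thresholds, and margin below are hypothetical), attributes are examined in order of importance and a decision is made as soon as one is decisive:

    import numpy as np

    # Hypothetical sketch: test attributes in ranking order and stop at the
    # first decisive one, mimicking a sequential decision process.
    def rank_attributes(X, y):
        # Rank attribute indices by how well each alone separates the classes
        # (absolute correlation with the label is a stand-in criterion).
        scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
        return list(np.argsort(scores)[::-1])

    def classify(x, ranked, thresholds, margin=0.5):
        for j in ranked:
            if x[j] > thresholds[j] + margin:
                return 1              # attribute j is decisive: class 1
            if x[j] < thresholds[j] - margin:
                return 0              # attribute j is decisive: class 0
        return 0                      # no attribute was decisive: default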


Towards an Organizing Principle for a Layered Perceptual Network

Neural Information Processing Systems

TOWARDS AN ORGANIZING PRINCIPLE FOR A LAYERED PERCEPTUAL NETWORK Ralph Linsker IBM Thomas J. Watson Research Center, Yorktown Heights, NY 10598 Abstract An information-theoretic optimization principle is proposed for the development of each processing stage of a multilayered perceptual network. This principle of "maximum information preservation" states that the signal transformation that is to be realized at each stage is one that maximizes the information that the output signal values (from that stage) convey about the input signal values (to that stage), subject to certain constraints and in the presence of processing noise. The quantity being maximized is a Shannon information rate. I provide motivation for this principle and -- for some simple model cases -- derive some of its consequences, discuss an algorithmic implementation, and show how the principle may lead to biologically relevant neural architectural features such as topographic maps, map distortions, orientation selectivity, and extraction of spatial and temporal signal correlations. A possible connection between this information-theoretic principle and a principle of minimum entropy production in nonequilibrium thermodynamics is suggested. Introduction This paper describes some properties of a proposed information-theoretic organizing principle for the development of a layered perceptual network.
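
For a linear-Gaussian stage the maximized quantity has a closed form: a unit y = w.x + n with Gaussian input covariance Sigma and Gaussian processing noise of variance s^2 conveys I(Y;X) = (1/2) log2(1 + (w'Sigma w)/s^2) bits per sample. A brief hedged sketch (illustrative numbers, not Linsker's simulations) showing that, under a fixed-norm constraint on w, this rate is maximized by the leading eigenvector of Sigma:

    import numpy as np

    # Shannon rate of one linear unit y = w.x + noise with Gaussian input.
    # "Maximum information preservation" picks w, under a constraint, to
    # maximize this rate. Numbers below are illustrative.
    Sigma = np.array([[1.0, 0.8],
                      [0.8, 1.0]])           # input covariance
    noise_var = 0.1                          # processing-noise variance

    def info_rate(w):
        signal_var = w @ Sigma @ w           # variance of w.x
        return 0.5 * np.log2(1.0 + signal_var / noise_var)   # bits/sample

    # Under ||w|| = 1 the rate is maximized by the top eigenvector of Sigma.
    vals, vecs = np.linalg.eigh(Sigma)
    w_best = vecs[:, -1]
    print(info_rate(w_best), info_rate(np.array([1.0, 0.0])))

This is the sense in which, in simple cases, the principle reduces to variance-maximizing development rules for the stage's weights.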


Programmable Synaptic Chip for Electronic Neural Networks

Neural Information Processing Systems

PROGRAMMABLE SYNAPTIC CHIP FOR ELECTRONIC NEURAL NETWORKS A. Moopenn, H. Langenbacher, A.P. Thakoor, and S.K. Khanna Jet Propulsion Laboratory California Institute of Technology Pasadena, CA 91109 ABSTRACT A binary synaptic matrix chip has been developed for electronic neural networks. The matrix chip contains a programmable 32X32 array of "long channel" NMOSFET binary connection elements implemented in a 3-um bulk CMOS process. Since the neurons are kept off-chip, the synaptic chip serves as a "cascadable" building block for a multi-chip synaptic network as large as 512X512 in size. As an alternative to the programmable NMOSFET (long channel) connection elements, tailored thin film resistors are deposited, in series with FET switches, on some CMOS test chips, to obtain the weak synaptic connections. Although deposition and patterning of the resistors require additional processing steps, they promise substantial savings in silicon area.
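
A hedged software analogue of this division of labor (array sizes from the abstract; weights, threshold, and update schedule are illustrative): the chip stores a binary connection matrix and sums currents, the thresholding neurons sit off-chip, and larger matrices are tiled from 32X32 building blocks.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 32
    W = rng.integers(0, 2, size=(N, N))   # programmable binary connections

    def step(state, threshold=8):
        currents = W @ state              # on-chip: summed synaptic currents
        return (currents > threshold).astype(int)   # off-chip: thresholding

    # Cascading: four 32x32 blocks tile a 64x64 synaptic matrix, the way
    # multiple chips would build a larger network (up to 512x512).
    blocks = [[rng.integers(0, 2, size=(N, N)) for _ in range(2)]
              for _ in range(2)]
    W_big = np.block(blocks)              # 64x64 from 32x32 building blocks

    state = rng.integers(0, 2, size=N)
    for _ in range(5):
        state = step(state)               # iterate the feedback network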


The Capacity of the Kanerva Associative Memory is Exponential

Neural Information Processing Systems

THE CAPACITY OF THE KANERVA ASSOCIATIVE MEMORY IS EXPONENTIAL P. A. Chou Stanford University, Stanford, CA 94305 ABSTRACT The capacity of an associative memory is defined as the maximum number of words that can be stored and retrieved reliably by an address within a given sphere of attraction. It is shown by sphere packing arguments that the capacity can grow at most exponentially as the address length increases. This exponential growth in capacity can actually be achieved by the Kanerva associative memory, provided its parameters are set optimally. Formulas for these optimal values are provided. The exponential growth in capacity for the Kanerva associative memory contrasts sharply with the sub-linear growth in capacity for the Hopfield associative memory.
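
The sphere-packing argument itself is short enough to compute. With n-bit addresses and spheres of attraction of Hamming radius r = delta*n, at most 2**n / V(n, r) spheres fit disjointly, where V(n, r) is the volume of a Hamming ball, so the capacity exponent approaches n*(1 - H2(delta)). A small sketch (the value of delta is chosen arbitrarily):

    import math

    # Sphere-packing bound: at most 2**n / V(n, r) words can have disjoint
    # spheres of attraction of Hamming radius r = delta * n.
    def hamming_ball(n, r):
        return sum(math.comb(n, k) for k in range(r + 1))

    def H2(p):
        # Binary entropy in bits.
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    delta = 0.1
    for n in (100, 200, 400):
        r = int(delta * n)
        exponent = n - math.log2(hamming_ball(n, r))  # log2 of the bound
        print(n, round(exponent, 1), round(n * (1 - H2(delta)), 1))

The printed pairs show the exact packing exponent converging toward the n*(1 - H2(delta)) asymptote as the address length n grows, which is the exponential growth rate referred to above.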


Bit-Serial Neural Networks

Neural Information Processing Systems

These functions are attractive for implementation in VLSI and WSI. For example, the natural fault-tolerance could be useful in silicon wafers with imperfect yield, where the network degradation is approximately proportional to the non-functioning silicon area. This arises from the representation, which gives rise to gentle degradation as faults appear. To cast neural networks in engineering language, a neuron is a state machine that is either "on" or "off", which in general assumes intermediate states as it switches smoothly between these extrema. The synapses weighting the signals from a
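
The arithmetic such hardware performs can be sketched in software. In bit-serial designs, operand bits arrive one per clock, least-significant bit first, and each synapse contributes shifted partial products to a running sum. A hedged, unsigned illustration (real chips also handle signed states and weights):

    # Bit-serial multiply-accumulate: consume one state bit per "clock" and
    # add a shifted copy of the weight whenever that bit is 1.
    def to_bits(x, width):
        return [(x >> i) & 1 for i in range(width)]   # LSB first

    def serial_mac(state_bits, weight, acc=0):
        for i, bit in enumerate(state_bits):
            if bit:
                acc += weight << i                    # shifted partial product
        return acc

    total = 0
    for state, weight in [(5, 3), (2, 7)]:            # two synapses
        total = serial_mac(to_bits(state, 4), weight, total)
    assert total == 5 * 3 + 2 * 7                     # accumulated activation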


Centric Models of the Orientation Map in Primary Visual Cortex

Neural Information Processing Systems

Centric Models of the Orientation Map in Primary Visual Cortex William Baxter, Department of Computer Science, S.U.N.Y. at Buffalo, NY 14620 Bruce Dow, Department of Physiology, S.U.N.Y. at Buffalo, NY 14620 Abstract In the visual cortex of the monkey the horizontal organization of the preferred orientations of orientation-selective cells follows two opposing rules: 1) neighbors tend to have similar orientations and 2) many different orientations are observed in a local region. Several orientation models which satisfy these constraints are found to differ in the spacing and the topological index of their singularities. Using the rate of orientation change as a measure, the models are compared to published experimental results. Introduction It has been known for some years that there exist orientation-sensitive neurons in the visual cortex of cats and monkeys [1,2]. These cells react to highly specific patterns of light occurring in narrowly circumscribed regions of the visual field, i.e., the cell's receptive field. The best patterns for such cells are typically not diffuse levels of light but elongated bars or edges oriented at specific angles.
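
A hedged sketch of a centric-style construction (grid size, center placement, and indices are chosen for illustration, not taken from the paper): preferred orientation is set by the index-weighted angle around singularity centers, and the rate of orientation change, the comparison measure named above, falls out as a wrap-aware finite difference.

    import numpy as np

    xs, ys = np.meshgrid(np.arange(64), np.arange(64))
    # Singularities with topological index +1/2 or -1/2 (illustrative layout).
    centers = [(16, 16, +0.5), (48, 16, -0.5), (16, 48, -0.5), (48, 48, +0.5)]

    theta = np.zeros(xs.shape)
    for cx, cy, q in centers:
        theta += q * np.arctan2(ys - cy, xs - cx)   # index-weighted angle

    orientation = np.degrees(theta) % 180.0         # preferred orientation

    # Rate of orientation change along x, respecting the 180-degree wrap.
    d = np.abs(np.diff(orientation, axis=1))
    rate = np.minimum(d, 180.0 - d)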


Pattern Class Degeneracy in an Unrestricted Storage Density Memory

Neural Information Processing Systems

ABSTRACT The study of distributed memory systems has produced a number of models which work well in limited domains. However, until recently, the application of such systems to real-world problems has been difficult because of storage limitations and their inherent architectural (and, for serial simulation, computational) complexity. Recent development of memories with unrestricted storage capacity and economical feedforward architectures has opened the way to the application of such systems to complex pattern recognition problems. However, such problems are sometimes underspecified by the features which describe the environment, and thus a significant portion of the pattern environment is often non-separable. We will review current work on high density memory systems and their network implementations.



Strategies for Teaching Layered Networks Classification Tasks

Neural Information Processing Systems

There is a widespread misconception that the delta rule is in some sense guaranteed to work on networks without hidden units. As previous authors have mentioned, there is no such guarantee for classification tasks. We will begin by presenting explicit counterexamples illustrating two different interesting ways in which the delta rule can fail. We go on to provide conditions which do guarantee that gradient descent will successfully train networks without hidden units to perform two-category classification tasks. We discuss the generalization of our ideas to networks with hidden units and to multicategory classification tasks.
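
A hedged sketch of the rule under discussion (data and learning rate are illustrative; this is not the paper's counterexample): the delta rule performs gradient descent on squared error between a linear output and +/-1 targets, and the quantity it drives down is that error, not the number of classification mistakes, which is exactly the gap the counterexamples exploit.

    import numpy as np

    rng = np.random.default_rng(0)
    # Two Gaussian classes with +/-1 targets; a bias input is appended.
    X = np.vstack([rng.normal(+2.0, 1.0, (20, 2)),
                   rng.normal(-2.0, 1.0, (20, 2))])
    X = np.hstack([X, np.ones((40, 1))])
    t = np.hstack([np.ones(20), -np.ones(20)])

    w = np.zeros(3)
    lr = 0.01
    for _ in range(500):
        y = X @ w                            # linear output, no hidden units
        w -= lr * X.T @ (y - t) / len(t)     # delta rule: gradient of MSE

    mistakes = int(np.sum(np.sign(X @ w) != t))   # the classification count
    print("misclassified:", mistakes)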