A Method for the Design of Stable Lateral Inhibition Networks that is Robust in the Presence of Circuit Parasitics
John L. Wyatt, Jr., D. L. Standley
A serious problem of unwanted spontaneous oscillation often arises with these circuits and renders them unusable in practice. This paper reports a design approach that guarantees such a system will be stable, even though the values of designed elements and parasitic elements in the resistive grid may be unknown. The method is based on a rigorous, somewhat novel mathematical analysis using Tellegen's theorem and the idea of Popov multipliers from control theory. It is thoroughly practical because the criteria are local in the sense that no overall analysis of the interconnected system is required, empirical in the sense that they involve only measurable frequency response data on the individual cells, and robust in the sense that unmodelled parasitic resistances and capacitances in the interconnection network cannot affect the analysis.

I. INTRODUCTION

The term "lateral inhibition" first arose in neurophysiology to describe a common form of neural circuitry in which the output of each neuron in some population is used to inhibit the response of each of its neighbors. Perhaps the best understood example is the horizontal cell layer in the vertebrate retina, in which lateral inhibition simultaneously enhances intensity edges and acts as an automatic gain control to extend the dynamic range of the retina as a whole. The principle has been used in the design of artificial neural system algorithms by Kohonen [2] and others and in the electronic design of neural chips by Carver Mead et al.
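For orientation, a Popov-multiplier stability argument of the kind invoked in the abstract above typically reduces to a frequency-domain inequality on measured cell data. The condition below is an illustrative sketch of that general shape only, not the paper's exact criterion; here q >= 0 is a hypothetical Popov multiplier and Z(j omega) stands for a cell's measured driving-point frequency response:

    \mathrm{Re}\left[ (1 + j\omega q)\, Z(j\omega) \right] \;\ge\; 0 \quad \text{for all } \omega \ge 0 .

A condition of this form is local and empirical in exactly the sense the abstract describes: it is checked cell by cell from frequency response measurements, with no analysis of the interconnected network.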
The Hopfield Model with Multi-Level Neurons
The generalization replaces two-state neurons by neurons taking a richer set of values. Two classes of neuron input-output relations are developed that guarantee convergence to stable states. The first is a class of "continuous" relations and the second is a class of allowed quantization rules for the neurons.
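As background for the convergence claim, recall the Lyapunov (energy) function of Hopfield's continuous model, which arguments of this style typically generalize; the notation below follows Hopfield (1984), not necessarily this paper:

    E = -\frac{1}{2} \sum_i \sum_j T_{ij} V_i V_j + \sum_i \frac{1}{R_i} \int_0^{V_i} g_i^{-1}(V)\, dV - \sum_i I_i V_i

With symmetric weights T_{ij} = T_{ji} and monotone increasing input-output relations g_i, one shows dE/dt <= 0 along trajectories, so the network settles to stable states.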
An Artificial Neural Network for Spatio-Temporal Bipolar Patterns: Application to Phoneme Classification
Toshiteru Homma, Les E. Atlas, Robert J. Marks II
In biological systems, it relates to such issues as classical and operant conditioning, temporal coordination of sensorimotor systems, and temporal reasoning. In artificial systems, it addresses such real-world tasks as robot control, speech recognition, dynamic image processing, moving-target detection by sonars or radars, EEG diagnosis, and seismic signal processing. Most of the processing elements used in neural network models for practical applications have been the formal neuron [1] or its variations. These elements lack a memory flexible to temporal patterns, thus limiting most of the neural network models previously proposed to problems of spatial (or static) patterns. Some past solutions have been to convert the dynamic problems to static ones using buffer (or storage) neurons, or using a layered network with/without feedback.
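To make the buffer-neuron idea concrete, here is a minimal sketch (hypothetical function name, NumPy for brevity) of a tapped delay line that unrolls a temporal pattern into spatial vectors a static network can then process:

    import numpy as np

    def buffer_temporal_pattern(samples, depth):
        # Tapped delay line: at each time step the most recent `depth`
        # samples are presented simultaneously, converting a temporal
        # pattern into a sequence of spatial vectors.
        buf = np.zeros(depth)
        frames = []
        for x in samples:
            buf = np.roll(buf, 1)  # shift older samples toward later taps
            buf[0] = x             # newest sample enters the first tap
            frames.append(buf.copy())
        return np.array(frames)    # shape: (len(samples), depth)

    # Example: a 10-step signal unrolled into 4-tap spatial frames.
    print(buffer_temporal_pattern(np.sin(np.linspace(0.0, 3.0, 10)), depth=4))

The cost of this conversion is the one the abstract implies: the buffer fixes a rigid window length in advance, rather than giving the elements a memory genuinely flexible to temporal patterns.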
The Sigmoid Nonlinearity in Prepyriform Cortex
This relationship takes the form of a sigmoid curve that describes normalized pulse output for normalized wave input. The curve is fitted using nonlinear regression and is described by its slope and maximum value. Measurements were made for both excitatory and inhibitory neurons in the cortex. These neurons are known to form a monosynaptic negative feedback loop. Both classes of cells can be described by the same parameters. The sigmoid curve is asymmetric in that the region of maximal slope is displaced toward the excitatory side.
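As an illustration of the fitting procedure described, the sketch below fits a generic logistic by nonlinear least squares; it is a stand-in only, since the paper's fitted curve is asymmetric, and the data and parameter names here are hypothetical:

    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(v, q_max, a):
        # Generic saturating pulse-output curve with maximum value q_max
        # and slope parameter a. Symmetric stand-in; the curve reported in
        # the paper is asymmetric about its region of maximal slope.
        return q_max / (1.0 + np.exp(-a * v))

    # Hypothetical normalized wave-input / pulse-output measurements.
    v = np.linspace(-3.0, 3.0, 25)
    p = sigmoid(v, 1.0, 1.5) + 0.02 * np.random.default_rng(1).standard_normal(v.size)

    (q_max_fit, a_fit), _ = curve_fit(sigmoid, v, p, p0=[1.0, 1.0])
    print(f"fitted maximum = {q_max_fit:.3f}, slope parameter = {a_fit:.3f}")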
The Connectivity Analysis of Simple Association
Oregon Graduate Center, Beaverton, OR 97006

ABSTRACT

The efficient realization, using current silicon technology, of Very Large Connection Networks (VLCN) with more than a billion connections requires that these networks exhibit a high degree of communication locality. Real neural networks exhibit significant locality, yet most connectionist/neural network models have little. In this paper, the connectivity requirements of a simple associative network are analyzed using communication theory. Several techniques based on communication theory are presented that improve the robustness of the network in the face of sparse, local interconnect structures. Also discussed are some potential problems when information is distributed too widely.

INTRODUCTION

Connectionist/neural network researchers are learning to program networks that exhibit a broad range of cognitive behavior.
Simulations Suggest Information Processing Roles for the Diverse Currents in Hippocampal Neurons
Lyle J. Borg-Graham
Harvard-MIT Division of Health Sciences and Technology and Center for Biological Information Processing, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139

ABSTRACT

A computer model of the hippocampal pyramidal cell (HPC) is described which integrates data from a variety of sources in order to develop a consistent description for this cell type. The model presently includes descriptions of eleven nonlinear somatic currents of the HPC, and the electrotonic structure of the neuron is modelled with a soma/short-cable approximation. Model simulations qualitatively or quantitatively reproduce a wide range of somatic electrical behavior in HPCs, and demonstrate possible roles for the various currents in information processing. There are several substrates for neuronal computation, including connectivity, synapses, morphometrics of dendritic trees, linear parameters of cell membrane, as well as nonlinear, time-varying membrane conductances, also referred to as currents or channels. In the classical description of neuronal function, the contribution of membrane channels is constrained to that of generating the action potential, setting firing threshold, and establishing the relationship between (steady-state) stimulus intensity and firing frequency.
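Nonlinear membrane currents of this kind are conventionally written in Hodgkin-Huxley form, so the somatic membrane equation such a model integrates has the general shape below (standard formalism, not the paper's exact notation; k ranges over the model's currents):

    C_m \frac{dV}{dt} = -\sum_k \bar{g}_k\, m_k^{p_k} h_k^{q_k} \left( V - E_k \right) + I_{\mathrm{stim}}

where m_k and h_k are voltage- and time-dependent activation and inactivation variables, \bar{g}_k is the maximal conductance, and E_k is the reversal potential of current k.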
Performance Measures for Associative Memories that Learn and Forget
The McCulloch/Pitts model discussed in [1] was one of the earliest neural network models to be analyzed. Some computational properties of what we call a Hopfield Associative Memory Network (HAMN), similar to the McCulloch/Pitts model, were discussed by Hopfield in [2]. The HAMN can be measured quantitatively by defining and evaluating its information capacity, as [2]-[6] have shown, but this network fails to exhibit the more complex computational capabilities that neural networks have, due to its simplified structure. The HAMN belongs to a class of networks which we call static. In static networks the learning and recall procedures are separate.
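To make the static-network distinction concrete, the sketch below separates a Hebbian learning phase from an asynchronous recall phase, as in Hopfield's model; function names and pattern sizes are hypothetical:

    import numpy as np

    def hamn_store(patterns):
        # Learning phase: Hebbian outer-product rule, zero self-connections.
        n = patterns.shape[1]
        W = patterns.T @ patterns / n
        np.fill_diagonal(W, 0.0)
        return W

    def hamn_recall(W, probe, sweeps=10):
        # Recall phase: asynchronous threshold updates; the energy is
        # nonincreasing, so the state settles to a fixed point.
        s = probe.copy()
        for _ in range(sweeps):
            changed = False
            for i in range(len(s)):
                s_new = 1.0 if W[i] @ s >= 0 else -1.0
                if s_new != s[i]:
                    s[i] = s_new
                    changed = True
            if not changed:
                break
        return s

    # Store two +/-1 patterns of length 16, then recall from a corrupted probe.
    rng = np.random.default_rng(0)
    P = rng.choice([-1.0, 1.0], size=(2, 16))
    W = hamn_store(P)
    probe = P[0].copy()
    probe[:3] *= -1  # flip three bits
    print(np.array_equal(hamn_recall(W, probe), P[0]))

Note that hamn_store runs to completion before hamn_recall is ever called; nothing learned during recall feeds back into the weights, which is precisely what makes the network static in the authors' sense.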