On Properties of Networks of Neuron-Like Elements
Baldi, Pierre, Venkatesh, Santosh S.
In this article we consider two aspects of computation with neural networks. First, we consider the problem of the complexity of the network required to compute classes of specified (structured) functions. We give a brief overview of basic known complexity theorems for readers familiar with neural network models but less familiar with circuit complexity theory. We argue that there is considerable computational and physiological justification for the thesis that shallow circuits (i.e., networks with relatively few layers) are computationally more efficient. We hence concentrate on structured (as opposed to random) problems that can be computed in shallow (constant-depth) circuits with a relatively small (polynomial) number of elements, and demonstrate classes of structured problems that are amenable to such low-cost solutions. We discuss an allied problem, the complexity of learning, and close with some open problems and a discussion of the observed limitations of the theoretical approach. We next turn to a rigorous classification of how much a network of given structure can do, i.e., the computational capacity of a given construct.
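As a concrete illustration of the kind of shallow, polynomial-size constructions at issue, the sketch below (an added example, not taken from the paper) computes an n-bit symmetric function, parity, with a depth-2 threshold circuit using O(n) units.

```python
# A minimal sketch: parity of n bits via a two-layer threshold circuit.
# Layer 1: unit k fires iff at least k input bits are on.
# Layer 2: alternating +1/-1 weights; the weighted sum equals 1 exactly
#          when an odd number of layer-1 units fire.

def threshold_gate(weights, theta, x):
    """Linear threshold unit: fires (1) iff the weighted input sum reaches theta."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) >= theta else 0

def parity_depth2(x):
    n = len(x)
    hidden = [threshold_gate([1] * n, k, x) for k in range(1, n + 1)]
    out_weights = [(-1) ** k for k in range(n)]          # +1, -1, +1, ...
    return threshold_gate(out_weights, 1, hidden)

assert parity_depth2([1, 0, 1, 1]) == 1   # three bits on -> odd
assert parity_depth2([1, 0, 1, 0]) == 0   # two bits on  -> even
```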
Presynaptic Neural Information Processing
Current knowledge about the activity dependence of the firing threshold, the conditions required for conduction failure, and the similarity of nodes along a single axon will be reviewed. An electronic circuit model for a site of low conduction safety in an axon will be presented. In response to single-frequency stimulation, the electronic circuit acts as a low-pass filter.
I. INTRODUCTION
The axon is often modeled as a wire which imposes a fixed delay on a propagating signal. Using this model, neural information processing is performed by synaptically summing weighted contributions of the outputs from other neurons.
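The sketch below (an illustrative assumption, not the paper's circuit) contrasts the conventional fixed-delay, weighted-sum picture with the low-pass behavior described for a site of low conduction safety; the function names, weights, and cutoff values are made up for illustration.

```python
import math

def delayed_weighted_sum(spike_trains, weights, delay_steps, t):
    """Classic view: the neuron sums weighted inputs, each arriving after a fixed axonal delay."""
    return sum(w * train[t - d] if t - d >= 0 else 0.0
               for train, w, d in zip(spike_trains, weights, delay_steps))

def lowpass_gain(freq_hz, cutoff_hz):
    """Amplitude gain of a first-order low-pass filter: high-frequency activity is attenuated."""
    return 1.0 / math.sqrt(1.0 + (freq_hz / cutoff_hz) ** 2)

spikes = [[0, 1, 0, 1, 0, 1]]                            # one presynaptic spike train
print(delayed_weighted_sum(spikes, [0.5], [2], t=3))     # contribution arriving 2 steps late
print(lowpass_gain(10.0, 50.0))    # low-rate input passes almost unchanged
print(lowpass_gain(500.0, 50.0))   # high-rate input is strongly attenuated
```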
A Trellis-Structured Neural Network
Petsche, Thomas, Dickinson, Bradley W.
We have presented a locally interconnected network which minimizes a function that is analogous to the log likelihood function near the global minimum. The results of simulations demonstrate that the network can successfully decode input sequences containing no noise at least as well as the globally connected Hopfield-Tank [6] decomposition network. Simulations also strongly support the conjecture that in the noiseless case, the network can be guaranteed to converge to the global minimum. In addition, for low error rates, the network can also decode noisy received sequences. We have been able to apply the Cohen-Grossberg proof of the stability of "on-center off-surround" networks to show that each stage will maximize the desired local "likelihood" for noisy received sequences. We have also shown that, in the large-gain limit, the network as a whole is stable and that the equilibrium points correspond to the MLSE decoder output.
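For readers unfamiliar with the decoding target, the following sketch (a generic added example, not the paper's network) shows what the MLSE decoder output means: a Viterbi search over a small, hypothetical two-state trellis for the bit sequence whose code symbols are closest in Hamming distance to the received sequence.

```python
def viterbi(received, transitions):
    """transitions[state][bit] = (next_state, output_symbols)."""
    paths = {0: ([], 0)}                      # state -> (decoded bits, accumulated distance)
    for r in received:
        new_paths = {}
        for state, (bits, dist) in paths.items():
            for bit, (nxt, out) in transitions[state].items():
                d = dist + sum(a != b for a, b in zip(out, r))
                if nxt not in new_paths or d < new_paths[nxt][1]:
                    new_paths[nxt] = (bits + [bit], d)
        paths = new_paths
    return min(paths.values(), key=lambda p: p[1])[0]

# Toy rate-1/2 code with one bit of memory: output = (input, input XOR previous input).
trans = {s: {b: (b, (b, b ^ s)) for b in (0, 1)} for s in (0, 1)}
print(viterbi([(1, 1), (0, 1), (1, 1)], trans))   # maximum-likelihood bit sequence
```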
Learning Representations by Recirculation
Hinton, Geoffrey E., McClelland, James L.
One criticism of back-propagation is that it requires a teacher to specify the desired output vectors. It is possible to dispense with the teacher in the case of "encoder" networks [2], in which the desired output vector is identical with the input vector (see Figure 1). The purpose of an encoder network is to learn good "codes" in the intermediate, hidden units. If, for example, there are fewer hidden units than input units, an encoder network will perform data compression [3].
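A minimal sketch of such an encoder network is given below; it is an added illustration trained with ordinary back-propagation (not the recirculation procedure the paper proposes), and the layer sizes, learning rate, and iteration count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_in, n_hidden = 8, 3                         # fewer hidden units than inputs
W1 = rng.normal(0, 0.5, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, n_in));  b2 = np.zeros(n_in)
X = np.eye(n_in)                              # the classic 8-3-8 encoder task

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                  # hidden "code"
    y = sigmoid(h @ W2 + b2)                  # reconstruction of the input
    dy = (y - X) * y * (1 - y)                # the input acts as its own teacher
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= h.T @ dy;  b2 -= dy.sum(0)
    W1 -= X.T @ dh;  b1 -= dh.sum(0)

print(np.round(y, 2))                         # should be close to the identity matrix
```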
A Dynamical Approach to Temporal Pattern Processing
Stornetta, W. Scott, Hogg, Tad, Huberman, Bernardo A.
W. Scott Stornetta, Stanford University, Physics Department, Stanford, CA 94305; Tad Hogg and B. A. Huberman, Xerox Palo Alto Research Center, Palo Alto, CA 94304
Recognizing patterns with temporal context is important for such tasks as speech recognition, motion detection and signature verification. We propose an architecture in which time serves as its own representation, and temporal context is encoded in the state of the nodes. We contrast this with the approach of replicating portions of the architecture to represent time. As one example of these ideas, we demonstrate an architecture with capacitive inputs serving as temporal feature detectors in an otherwise standard back propagation model. Experiments involving motion detection and word discrimination serve to illustrate novel features of the system.
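The sketch below (an assumed form, not the paper's exact implementation) illustrates the idea of a capacitive input: a unit that holds a decaying trace of past inputs, so temporal context lives in the unit's state rather than in replicated copies of the architecture. The decay constant and input pattern are arbitrary.

```python
def capacitive_trace(inputs, decay=0.8):
    """Leaky integration: trace <- decay * trace + (1 - decay) * input."""
    trace, history = 0.0, []
    for x in inputs:
        trace = decay * trace + (1.0 - decay) * x
        history.append(round(trace, 3))
    return history

# The same brief pulse leaves a lingering, slowly fading signature.
print(capacitive_trace([0, 1, 1, 0, 0, 0, 0]))
```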
Probabilistic Characterization of Neural Model Computations
Learning algorithms for the neural network which search for the "most probable" member of P can then be designed. Statistical tests which decide if the "true" or environmental probability distribution is in P can also be developed. Example applications of the theory to the highly nonlinear back-propagation learning algorithm and the networks of Hopfield and Anderson are discussed.
INTRODUCTION
A connectionist system is a network of simple neuron-like computing elements which can store and retrieve information, and, most importantly, make generalizations. Using terminology suggested by Rumelhart & McClelland [1], the computing elements of a connectionist system are called units, and each unit is associated with a real number indicating its activity level. The activity level of a given unit in the system can also influence the activity level of another unit. The degree of influence between two such units is often characterized by a parameter of the system known as a connection strength. During the information retrieval process some subset of the units in the system are activated, and these units in turn activate neighboring units via the inter-unit connection strengths.
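As a toy illustration of searching for the "most probable" member of a family P (an added example using an assumed one-parameter Bernoulli family, not the paper's formalism), the sketch below picks the candidate distribution that maximizes the likelihood of observed binary activity.

```python
import math

def log_likelihood(p, samples):
    """Log probability of binary samples under a Bernoulli(p) model."""
    return sum(math.log(p if s == 1 else 1.0 - p) for s in samples)

samples = [1, 1, 0, 1, 1, 0, 1, 1]                 # observed unit activity
family = [i / 100 for i in range(1, 100)]          # candidate members of P
best = max(family, key=lambda p: log_likelihood(p, samples))
print(best)                                        # close to the empirical rate 0.75
```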
An Optimization Network for Matrix Inversion
Jang, Ju-Seog, Lee, Soo-Young, Shin, Sang-Yung
Inverse matrix calculation can be considered as an optimization problem. We have demonstrated that this problem can be rapidly solved by highly interconnected simple neuron-like analog processors. A network for matrix inversion based on the concept of Hopfield's neural network was designed and implemented with electronic hardware. With slight modifications, the network is readily applicable to solving linear simultaneous equations efficiently. Notable features of this circuit are its potential speed, due to parallel processing, and its robustness against variations in device parameters.
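A minimal sketch of the underlying optimization view is given below (an added digital simulation with an assumed step size and example matrix, not the reported analog circuit): matrix inversion is cast as minimizing ||AX - I||^2, and simple gradient-descent dynamics settle to X ≈ A⁻¹.

```python
import numpy as np

def invert_by_descent(A, rate=0.01, steps=20000):
    n = A.shape[0]
    X = np.zeros_like(A, dtype=float)
    I = np.eye(n)
    for _ in range(steps):
        X -= rate * 2.0 * A.T @ (A @ X - I)   # gradient of ||A X - I||_F^2
    return X

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(np.round(invert_by_descent(A) @ A, 3))  # approximately the identity
```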
Scaling Properties of Coarse-Coded Symbol Memories
Rosenfeld, Ronald, Touretzky, David S.
DCPS' memory scheme is a modified version of the Random Receptors method [5]. The symbol space is the set of all triples over a 25-letter alphabet. Units have fixed-size receptive fields organized as 6 x 6 x 6 subspaces. Patterns are manipulated to minimize the variance in pattern size across symbols.
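The sketch below (an illustrative construction in the spirit of the Random Receptors scheme; the unit count and random seed are assumptions, and the pattern-size equalization mentioned above is omitted) shows how a triple over a 25-letter alphabet is coarse-coded as the set of units whose 6 x 6 x 6 receptive fields contain it.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXY"          # 25 letters
random.seed(0)

def make_unit():
    """A receptive field: 6 letters allowed in each of the 3 positions."""
    return tuple(frozenset(random.sample(ALPHABET, 6)) for _ in range(3))

units = [make_unit() for _ in range(2000)]

def code(symbol):
    """The coarse code of a triple: indices of all units whose field covers it."""
    return {i for i, field in enumerate(units)
            if all(ch in field[pos] for pos, ch in enumerate(symbol))}

print(len(code("CAT")))   # expected pattern size ~ 2000 * (6/25)**3 ≈ 28 units
```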
Analysis of Distributed Representation of Constituent Structure in Connectionist Systems
The method allows the fully distributed representation of symbolic structures: the roles in the structures, as well as the fillers for those roles, can be arbitrarily non-local. Fully and partially localized special cases reduce to existing cases of connectionist representations of structured data; the tensor product representation generalizes these and the few existing examples of fully distributed representations of structures. The representation saturates gracefully as larger structures are represented; it permits recursive construction of complex representations from simpler ones; it respects the independence of the capacities to generate and maintain multiple bindings in parallel; it extends naturally to continuous structures and continuous representational patterns; it permits values to also serve as variables; it enables analysis of the interference of symbolic structures stored in associative memories; and it leads to characterization of optimal distributed representations of roles and a recirculation algorithm for learning them.
Introduction
Any model of complex information processing in networks of simple processors must solve the problem of representing complex structures over network elements. Connectionist models of realistic natural language processing, for example, must employ computationally adequate representations of complex sentences. Many connectionists feel that to develop connectionist systems with the computational power required by complex tasks, distributed representations must be used: an individual processing unit must participate in the representation of multiple items, and each item must be represented as a pattern of activity of multiple processors. Connectionist models have used more or less distributed representations of more or less complex structures, but little if any general analysis of the problem of distributed representation of complex information has been carried out. This paper reports results of an analysis of a general method called the tensor product representation.
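The sketch below illustrates the basic tensor product binding operation with made-up role and filler vectors (the vectors and the example structure are assumptions for illustration): a filler is bound to a role by their outer product, bindings are superimposed by addition, and a filler is recovered by projecting onto an orthonormal role vector.

```python
import numpy as np

roles = {"subject": np.array([1.0, 0.0]),      # orthonormal role vectors
         "object":  np.array([0.0, 1.0])}
fillers = {"John": np.array([1.0, 0.0, 1.0]),
           "ball": np.array([0.0, 1.0, 1.0])}

# Represent the structure {subject -> John, object -> ball} as one tensor.
structure = (np.outer(fillers["John"], roles["subject"]) +
             np.outer(fillers["ball"], roles["object"]))

# Unbinding: project the tensor onto a role vector to retrieve its filler.
print(structure @ roles["subject"])   # -> [1. 0. 1.], the pattern for "John"
print(structure @ roles["object"])    # -> [0. 1. 1.], the pattern for "ball"
```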