An Artificial Neural Network for Spatio-Temporal Bipolar Patterns: Application to Phoneme Classification
Atlas, Les E., Homma, Toshiteru, Marks II, Robert J.
In biological systems, the processing of spatio-temporal patterns relates to such issues as classical and operant conditioning, temporal coordination of sensorimotor systems, and temporal reasoning. In artificial systems, it addresses such real-world tasks as robot control, speech recognition, dynamic image processing, moving-target detection by sonar or radar, EEG diagnosis, and seismic signal processing.
Connectivity Versus Entropy
ABSTRACT How does the connectivity of a neural network (number of synapses per neuron) relate to the complexity of the problems it can handle (measured by the entropy)? Switching theory would suggest no relation at all, since all Boolean functions can be implemented using a circuit with very low connectivity (e.g., using two-input NAND gates). However, for a network that learns a problem from examples using a local learning rule, we prove that the entropy of the problem becomes a lower bound for the connectivity of the network. INTRODUCTION The most distinguishing feature of neural networks is their ability to spontaneously learn the desired function from 'training' samples, i.e., their ability to program themselves. Clearly, a given neural network cannot just learn any function; there must be some restrictions on which networks can learn which functions.
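To make the switching-theory point concrete, here is a minimal Python sketch (our illustration, not from the paper) showing that two-input NAND gates, each with connectivity 2, suffice to build other Boolean functions:

```python
# Illustration only: two-input NAND is universal, so any Boolean
# function can be wired from gates with only two inputs each.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

def xor(a: bool, b: bool) -> bool:
    # Classic four-NAND construction of XOR.
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            assert xor(a, b) == (a != b)  # XOR built purely from NANDs
```

The paper's point is that this universality does not survive learning: once the function must be acquired from examples by a local rule, high-entropy problems force high connectivity.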
Discovering Structure from Motion in Monkey, Man and Machine
Ralph M. Siegel, The Salk Institute of Biology, La Jolla, CA 92037. ABSTRACT The ability to obtain three-dimensional structure from visual motion is important for the survival of human and nonhuman primates. Using a parallel processing model, the current work explores how the biological visual system might solve this problem and how the neurophysiologist might go about understanding the solution. In the present work, much effort has been expended mimicking the visual system. This was done for one main reason: the model was designed to help direct physiological experiments in the primate. It was hoped that if an approach for understanding the model could be developed, the approach could then be directed at the primate's visual system.
Performance Measures for Associative Memories that Learn and Forget
Recently, many modifications to the McCulloch/Pitts model have been proposed where both learning and forgetting occur. Given that the network never saturates (ceases to function effectively due to an overload of information), the learning updates can continue indefinitely. For these networks, we need to introduce performance measures in addition to the information capacity to evaluate the different networks. We mathematically define quantities such as the plasticity of a network, the efficacy of an information vector, and the probability of network saturation. From these quantities we analytically compare different networks.
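As a concrete illustration of learning with forgetting (our sketch, not the paper's definitions), a Hopfield-style memory with a decaying, clipped Hebbian update lets old traces fade so the weights stay bounded and the network need not saturate:

```python
import numpy as np

def update_weights(W, x, eta=0.1, decay=0.05, bound=1.0):
    """One learn-and-forget step: Hebbian outer-product learning with
    exponential decay of old traces, clipped to keep weights bounded.
    (Illustrative rule only; the paper analyzes such schemes abstractly.)"""
    x = np.asarray(x, dtype=float)          # bipolar pattern, entries +/-1
    W = (1.0 - decay) * W + eta * np.outer(x, x)
    np.fill_diagonal(W, 0.0)                # no self-connections
    return np.clip(W, -bound, bound)

def recall(W, probe, steps=20):
    """Synchronous sign-threshold recall from a noisy probe."""
    s = np.sign(probe).astype(float)
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1.0
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

rng = np.random.default_rng(0)
n = 64
W = np.zeros((n, n))
patterns = [rng.choice([-1.0, 1.0], size=n) for _ in range(30)]
for p in patterns:                          # old patterns fade as new arrive
    W = update_weights(W, p)
noisy = patterns[-1] * rng.choice([1, -1], size=n, p=[0.9, 0.1])
print("recent pattern recovered:", np.array_equal(recall(W, noisy), patterns[-1]))
```

Quantities like plasticity and saturation probability can then be read as properties of such an update rule: how strongly a new vector imprints, and how likely the bounded weights are to stop storing anything useful.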
Learning a Color Algorithm from Examples
Poggio, Tomaso A., Hurlbert, Anya C.
The operator also produces simultaneous brightness contrast, as expected from the shape and sign of its surround. The output reflectance it computes for a patch of fixed input reflectance decreases linearly with increasing average irradiance of the input test vector in which the patch appears. Similarly, to us, a dark patch appears darker against a light background than against a dark one.
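A toy center-minus-surround filter (our illustration, not the operator learned in the paper) reproduces the qualitative effect: the response to a fixed-value patch drops linearly as the surround gets brighter:

```python
import numpy as np

def center_surround(signal, idx, surround=5, k=0.5):
    """Toy 1-D center-minus-surround operator (illustrative only):
    response = center value minus k * mean of its neighborhood."""
    lo, hi = max(0, idx - surround), min(len(signal), idx + surround + 1)
    neighborhood = np.concatenate([signal[lo:idx], signal[idx + 1:hi]])
    return signal[idx] - k * neighborhood.mean()

patch_value = 0.5
for background in (0.2, 0.5, 0.8):          # increasingly bright surround
    scene = np.full(21, background)
    scene[10] = patch_value                  # same patch, different context
    print(f"background {background:.1f} -> response {center_surround(scene, 10):.3f}")
# The response falls as background brightness rises: simultaneous contrast.
```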
A Trellis-Structured Neural Network
Petsche, Thomas, Dickinson, Bradley W.
We have presented a locally interconnected network which minimizes a function that is analogous to the log likelihood function near the global minimum. The results of simulations demonstrate that the network can successfully decode input sequences containing no noise at least as well as the globally connected Hopfield-Tank [6] decomposition network. Simulations also strongly support the conjecture that in the noiseless case, the network can be guaranteed to converge to the global minimum. In addition, for low error rates, the network can also decode noisy received sequences. We have been able to apply the Cohen-Grossberg proof of the stability of "on-center off-surround" networks to show that each stage will maximize the desired local "likelihood" for noisy received sequences. We have also shown that, in the large gain limit, the network as a whole is stable and that the equilibrium points correspond to the MLSE decoder output. Simulations have verified this proof of stability even for relatively small gains. Unfortunately, a proof of strict Lyapunov stability is very difficult, and may not be possible, because of the cooperative connections in the network. This network demonstrates that it is possible to perform interesting functions even if only localized connections are allowed, although there may be some loss of performance.
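For reference, the MLSE output that the network's equilibria are said to match is what a standard Viterbi decoder computes. The sketch below (our illustration, for a hypothetical rate-1/2 convolutional code with hard decisions, not the paper's setup) states that target concretely:

```python
import numpy as np

# Toy rate-1/2 convolutional code, constraint length 3 (generators 7, 5 octal).
G = [(1, 1, 1), (1, 0, 1)]

def encode(bits):
    state = (0, 0)
    out = []
    for b in bits:
        window = (b,) + state
        out += [sum(w * g for w, g in zip(window, gen)) % 2 for gen in G]
        state = (b, state[0])
    return out

def viterbi(received):
    """Hard-decision MLSE: minimize Hamming distance over all code paths."""
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    cost = {s: (0 if s == (0, 0) else float("inf")) for s in states}
    paths = {s: [] for s in states}
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        new_cost, new_paths = {}, {}
        for s in states:
            best = None
            for prev in states:
                for b in (0, 1):
                    if (b, prev[0]) != s:      # valid trellis transition?
                        continue
                    window = (b,) + prev
                    expect = [sum(w * g for w, g in zip(window, gen)) % 2
                              for gen in G]
                    c = cost[prev] + sum(int(e != x) for e, x in zip(expect, r))
                    if best is None or c < best[0]:
                        best = (c, paths[prev] + [b])
            new_cost[s], new_paths[s] = best
        cost, paths = new_cost, new_paths
    final = min(states, key=lambda s: cost[s])
    return paths[final]

msg = [1, 0, 1, 1, 0, 0, 1]
rx = encode(msg)
rx[3] ^= 1                       # flip one channel bit
print("decoded == sent:", viterbi(rx) == msg)
```

The trellis-structured network aims to reach this same minimum-distance path, but through locally connected analog dynamics rather than an explicit dynamic-programming sweep.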
Phasor Neural Networks
ABSTRACT A novel network type is introduced which uses unit-length 2-vectors for local variables. As an example of its applications, associative memory nets are defined and their performance analyzed. Real systems corresponding to such 'phasor' models can be e.g. INTRODUCTION Most neural network models use either binary local variables or scalars combined with sigmoidal nonlinearities. Rather awkward coding schemes have to be invoked if one wants to maintain linear relations between the local signals being processed in e.g.
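A minimal numerical sketch of the idea (ours, not the paper's exact model): represent each unit as a unit-modulus complex number (a unit-length 2-vector), store patterns with a Hebbian-style complex outer product, and update by projecting each unit's local field back onto the unit circle:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 3                                  # units, stored patterns

# Random phasor patterns: unit-length complex numbers (unit 2-vectors).
phases = rng.uniform(0, 2 * np.pi, size=(m, n))
patterns = np.exp(1j * phases)

# Hebbian-style storage: complex outer product, zero diagonal.
W = sum(np.outer(p, p.conj()) for p in patterns) / n
np.fill_diagonal(W, 0)

def recall(state, steps=30):
    """Iterate: each unit takes the phase of its local field (unit modulus)."""
    for _ in range(steps):
        field = W @ state
        state = field / np.abs(field)         # project back onto unit circle
    return state

# Probe with a phase-jittered version of pattern 0.
probe = patterns[0] * np.exp(1j * rng.normal(0, 0.3, size=n))
out = recall(probe)
overlap = np.abs(np.vdot(out, patterns[0])) / n
print(f"overlap with stored pattern: {overlap:.3f}")  # near 1 => recalled
```

Note that recall is only defined up to a global phase rotation, which is why the overlap is measured as a magnitude.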
Speech Recognition Experiments with Perceptrons
ABSTRACT Artificial neural networks (ANNs) are capable of accurate recognition of simple speech vocabularies such as isolated digits [1]. This paper looks at two more difficult vocabularies: the alphabetic E-set and a set of polysyllabic words. The E-set is difficult because it contains weak discriminants, and polysyllables are difficult because of timing variation. Polysyllabic word recognition is aided by a time pre-alignment technique based on dynamic programming, and E-set recognition is improved by focusing attention. Recognition accuracies are better than 98% for both vocabularies when implemented with a single-layer perceptron.
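Dynamic-programming time alignment of the kind the abstract mentions is commonly done with dynamic time warping; below is a minimal sketch (our illustration, not the authors' exact procedure) that aligns two renditions of the same word spoken at different rates:

```python
import numpy as np

def dtw_align(a, b):
    """Dynamic time warping between two feature sequences (frames x dims).
    Returns the minimal cumulative distance and the warping path, the kind
    of pre-alignment used before feeding fixed-length input to a perceptron.
    (Illustrative sketch; not the paper's exact alignment procedure.)"""
    A, B = len(a), len(b)
    D = np.full((A + 1, B + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, A + 1):
        for j in range(1, B + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the optimal warping path.
    path, i, j = [], A, B
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return D[A, B], path[::-1]

# Two synthetic "utterances" of the same word at different speaking rates.
t = np.linspace(0, 1, 40)
fast = np.sin(2 * np.pi * 3 * t[:25, None])            # shorter rendition
slow = np.sin(2 * np.pi * 3 * (25 / 40) * t[:, None])  # stretched rendition
dist, path = dtw_align(fast, slow)
print(f"alignment cost {dist:.3f} over path of length {len(path)}")
```

After such pre-alignment, timing variation is largely factored out, so even a single-layer classifier can operate on a fixed-length, time-normalized representation.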