Centric Models of the Orientation Map in Primary Visual Cortex
William Baxter, Department of Computer Science, S.U.N.Y. at Buffalo, NY 14620; Bruce Dow, Department of Physiology, S.U.N.Y. at Buffalo, NY 14620

Abstract In the visual cortex of the monkey the horizontal organization of the preferred orientations of orientation-selective cells follows two opposing rules: 1) neighbors tend to have similar orientations, and 2) many different orientations are observed in a local region. Several orientation models which satisfy these constraints are found to differ in the spacing and the topological index of their singularities. Using the rate of orientation change as a measure, the models are compared to published experimental results.

Introduction It has been known for some years that there exist orientation-sensitive neurons in the visual cortex of cats and monkeys [1,2]. These cells react to highly specific patterns of light occurring in narrowly circumscribed regions of the visual field, i.e., the cell's receptive field. The best patterns for such cells are typically not diffuse levels of illumination but elongated bars or edges oriented at specific angles.
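For readers who want to see the centric construction concretely, here is a minimal sketch (our illustrative parameters, not the authors' simulations): each "center" is a point singularity of topological index q, contributing q * atan2(y - y0, x - x0) to the preferred orientation, and orientations are taken modulo pi.

```python
import numpy as np

# Minimal sketch of an orientation map built from point singularities
# ("centers"). Grid size, center placement, and indices are arbitrary
# illustrative choices.
def orientation_map(centers, size=64):
    y, x = np.mgrid[0:size, 0:size]
    theta = np.zeros((size, size))
    for (cx, cy, q) in centers:          # q = topological index, e.g. +-1/2
        theta += q * np.arctan2(y - cy, x - cx)
    return np.mod(theta, np.pi)          # orientation is defined modulo pi

# Two pinwheels of opposite index: neighbors change smoothly (rule 1)
# while every orientation appears near each center (rule 2).
m = orientation_map([(20, 32, +0.5), (44, 32, -0.5)])

# Rate of orientation change (the comparison measure in the abstract),
# approximated by the gradient magnitude with circular wrap-around.
dy, dx = np.gradient(m)
rate = np.hypot(np.mod(dx + np.pi / 2, np.pi) - np.pi / 2,
                np.mod(dy + np.pi / 2, np.pi) - np.pi / 2)
print(rate.mean())
```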
PATTERN CLASS DEGENERACY IN AN UNRESTRICTED STORAGE DENSITY MEMORY
Scofield, Christopher L., Reilly, Douglas L., Elbaum, Charles, Cooper, Leon N.
ABSTRACT The study of distributed memory systems has produced a number of models which work well in limited domains. However, until recently, the application of such systems to real-world problems has been difficult because of storage limitations and their inherent architectural (and, for serial simulation, computational) complexity. Recent development of memories with unrestricted storage capacity and economical feedforward architectures has opened the way to the application of such systems to complex pattern recognition problems. However, such problems are sometimes underspecified by the features which describe the environment, and thus a significant portion of the pattern environment is often non-separable. We will review current work on high-density memory systems and their network implementations.
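As a schematic of the prototype-based, unrestricted-storage memories reviewed here (a sketch in the spirit of the authors' RCE-style networks, with the class, radius, and threshold values below being our own assumptions): each stored pattern owns a sphere of influence whose radius shrinks when another class intrudes, so non-separable regions end up tiled by many small, overlapping cells.

```python
import numpy as np

# Schematic prototype-based memory: each stored pattern owns a
# hypersphere of influence. Radii shrink on conflict, so capacity is
# limited only by the number of prototypes committed.
class PrototypeMemory:
    def __init__(self, r_init=1.0, r_min=0.05):
        self.protos, self.r_init, self.r_min = [], r_init, r_min

    def train(self, x, label):
        fired = [p for p in self.protos
                 if np.linalg.norm(x - p["w"]) < p["r"]]
        if not any(p["label"] == label for p in fired):
            self.protos.append({"w": np.array(x, float),
                                "r": self.r_init, "label": label})
        for p in fired:                            # wrong-class cells shrink
            if p["label"] != label:
                p["r"] = max(self.r_min, np.linalg.norm(x - p["w"]))

    def classify(self, x):
        fired = {p["label"] for p in self.protos
                 if np.linalg.norm(x - p["w"]) < p["r"]}
        return fired.pop() if len(fired) == 1 else None  # None = ambiguous

mem = PrototypeMemory()
for x, y in [((0, 0), "A"), ((1, 0), "B"), ((0.4, 0), "A")]:
    mem.train(np.array(x, float), y)
print(mem.classify(np.array([0.1, 0.0])))          # -> "A"
```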
Strategies for Teaching Layered Networks Classification Tasks
Wittner, Ben S., Denker, John S.
There is a widespread misconception that the delta rule is in some sense guaranteed to work on networks without hidden units. As previous authors have mentioned, there is no such guarantee for classification tasks. We begin by presenting explicit counterexamples illustrating two different interesting ways in which the delta rule can fail. We go on to provide conditions which do guarantee that gradient descent will successfully train networks without hidden units to perform two-category classification tasks. We discuss the generalization of our ideas to networks with hidden units and to multi-category classification tasks.
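A concrete counterexample of the kind the abstract promises (our own construction, not necessarily one of the paper's): the data below are linearly separable by sign(x), yet the delta rule, i.e. gradient descent on squared error with targets of +-1, converges to a separator that misclassifies x = -1, because the error is dominated by the distant, already-correctly-classifiable points at x = -20.

```python
import numpy as np

# Linearly separable 1-D data (sign(x) separates the classes) on which
# the delta rule nevertheless fails as a classifier.
x = np.array([1.0, 2.0, -1.0] + [-20.0] * 10)
t = np.array([1.0, 1.0, -1.0] + [-1.0] * 10)

w, b, lr = 0.0, 0.0, 1e-3
for _ in range(200_000):                 # plain delta-rule updates
    y = w * x + b
    w -= lr * np.mean((y - t) * x)
    b -= lr * np.mean(y - t)

print(w, b)                              # ~0.068, ~0.34
print(np.sign(w * x + b) == np.sign(t))  # x = -1 ends up misclassified
```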
Neural Net and Traditional Classifiers
Huang, William Y., Lippmann, Richard P.
Previous work on nets with continuous-valued inputs led to generative procedures to construct convex decision regions with two-layer perceptrons (one hidden layer) and arbitrary decision regions with three-layer perceptrons (two hidden layers). Here we demonstrate that two-layer perceptron classifiers trained with back propagation can form both convex and disjoint decision regions. Such classifiers are robust, train rapidly, and provide good performance with simple decision regions. When complex decision regions are required, however, convergence time can be excessively long and performance is often no better than that of k-nearest neighbor classifiers. Three neural net classifiers are presented that provide more rapid training in such situations.
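A minimal illustration of the disjoint-region claim (a toy sketch, not the paper's experiments): a two-layer perceptron trained with back propagation learns XOR, whose positive class occupies two disconnected corners of the unit square, so one hidden layer already yields a non-convex, disconnected decision region.

```python
import numpy as np

# Tiny two-layer perceptron (one hidden layer) trained with
# back propagation on XOR; sizes and learning rate are arbitrary.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
t = np.array([[0], [1], [1], [0]], float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))

for _ in range(20_000):
    h = sig(X @ W1 + b1)                  # hidden layer
    y = sig(h @ W2 + b2)                  # output layer
    dy = (y - t) * y * (1 - y)            # squared-error gradient
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ dy; b2 -= 0.5 * dy.sum(0)
    W1 -= 0.5 * X.T @ dh; b1 -= 0.5 * dh.sum(0)

print(np.round(y.ravel(), 2))             # typically ~ [0, 1, 1, 0]
```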
Distributed Neural Information Processing in the Vestibulo-Ocular System
Lau, Clifford, Honrubia, Vicente
In this model, head motion is sensed topographically by hair cells in the semicircular canals. Hair cell signals are then processed by multiple synapses in the primary afferent neurons, which exhibit a continuum of varying dynamics. The model is an application of the concept of "multilayered" neural networks to the description of findings in the bullfrog vestibular nerve, and allows us to formulate mathematically the behavior of an assembly of neurons whose physiological characteristics vary according to their anatomical properties.

INTRODUCTION Traditionally, the physiological properties of individual vestibular afferent neurons have been modeled as a linear time-invariant system based on Steinhausen's description of cupular motion.
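A minimal sketch of that linear-system view, with all time constants and gains as illustrative assumptions rather than fitted bullfrog values: cupular mechanics act roughly as a first-order high-pass filter on head velocity, and a continuum of dynamics is obtained by varying the filter's time constant across the afferent population.

```python
import numpy as np

# Torsion-pendulum-style sketch: afferent firing follows high-passed
# head velocity; tau sets where a neuron sits on the tonic-phasic
# continuum. All constants are illustrative assumptions.
def afferent_rate(head_velocity, dt, tau, gain, baseline=60.0):
    c, rates = 0.0, []
    for v in head_velocity:
        c += dt * (v - c) / tau          # cupula relaxes with constant tau
        rates.append(baseline + gain * (v - c))   # high-passed velocity
    return np.array(rates)

dt = 0.001
t = np.arange(0.0, 2.0, dt)
stim = np.where(t < 1.0, 50.0, 0.0)      # 1 s velocity step, deg/s

for tau in (0.05, 0.5, 5.0):             # short tau: phasic; long: tonic
    r = afferent_rate(stim, dt, tau, gain=1.0)
    print(f"tau={tau}s: onset {r[0]:.1f} Hz, "
          f"end of rotation {r[999]:.1f} Hz, after stop {r[1001]:.1f} Hz")
```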
Correlational Strength and Computational Algebra of Synaptic Connections Between Neurons
Fetz, Eberhard E.
Department of Physiology & Biophysics, University of Washington, Seattle, WA 98195

ABSTRACT Intracellular recordings in spinal cord motoneurons and cerebral cortex neurons have provided new evidence on the correlational strength of monosynaptic connections, and the relation between the shapes of postsynaptic potentials and the associated increase in firing probability. In these cells, excitatory postsynaptic potentials (EPSPs) produce cross-correlogram peaks which largely resemble the derivative of the EPSP. Additional synaptic noise broadens the peak, but the peak area -- i.e., the number of above-chance firings triggered per EPSP -- remains proportional to the EPSP amplitude. The consequences of these data for information processing by polysynaptic connections are discussed. The effects of sequential polysynaptic links can be calculated by convolving the effects of the underlying monosynaptic connections.
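The convolution rule in the last sentence can be made concrete with a short sketch (illustrative waveforms, not the recorded data): take an alpha-function EPSP, approximate the monosynaptic correlogram peak by the positive part of the EPSP's time derivative, and convolve two such kernels to obtain the effect of a two-link serial path.

```python
import numpy as np

# Correlogram algebra sketch: mono ~ dEPSP/dt (excess firing), and a
# disynaptic A->B->C kernel is the convolution of two monosynaptic ones.
dt = 0.1                                     # ms
t = np.arange(0, 20, dt)
epsp = (t / 2.0) * np.exp(1 - t / 2.0)       # alpha function, peak at 2 ms

mono = np.gradient(epsp, dt)                 # correlogram peak ~ derivative
mono = np.clip(mono, 0, None)                # keep the above-chance part

di = np.convolve(mono, mono)[: len(t)] * dt  # serial two-link kernel

print(f"monosynaptic peak at {t[mono.argmax()]:.1f} ms, "
      f"disynaptic peak at {t[di.argmax()]:.1f} ms (later and broader)")
```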
LEARNING BY STATE RECURRENCE DETECTION
Rosen, Bruce E., Goodwin, James M., Vidal, Jacques J.
Recurrence learning is a new nonlinear reward-penalty algorithm. It exploits information found during learning trials to reinforce decisions that result in the recurrence of non-failing states. The approach is applied both to Michie and Chambers' BOXES algorithm and to Barto, Sutton, and Anderson's extension, the ASE/ACE system, and has significantly improved the convergence rate of stochastically based learning automata. Recurrence learning applies positive reinforcement during the exploration of the search space, whereas the BOXES and ASE algorithms apply only negative weight reinforcement, and then only on failure. Simulation results show that the added information from recurrence learning increases the learning rate.
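A minimal sketch of the recurrence idea on a toy environment of our own (not the cart-pole simulations reported): BOXES-style weights per (state, action) are punished along the whole trial history on failure, but the decisions made between two visits to the same non-failing state are positively reinforced as soon as that state recurs.

```python
import random

random.seed(0)
N_STATES, ACTIONS = 8, (-1, +1)            # states 0 and 7 are failures
w = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def policy(s):
    # stochastic automaton: noisy preference for the heavier action
    return max(ACTIONS, key=lambda a: w[(s, a)] + random.gauss(0, 0.5))

lengths = []
for trial in range(200):
    s, history, last_seen = N_STATES // 2, [], {}
    for step in range(100):
        if s in (0, N_STATES - 1):                  # failure: punish every
            for fs, fa in history:                  # decision this trial
                w[(fs, fa)] -= 0.10
            break
        if s in last_seen:                          # non-failing state
            for rs, ra in history[last_seen[s]:]:   # recurred: reward the
                w[(rs, ra)] += 0.05                 # loop that returned here
        last_seen[s] = len(history)
        a = policy(s)
        history.append((s, a))
        s += a
    lengths.append(step)

# survival tends to lengthen as the recurrence reward accumulates
print(sum(lengths[:50]) / 50, sum(lengths[-50:]) / 50)
```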
Encoding Geometric Invariances in Higher-Order Neural Networks
Giles, C. Lee, Griffin, R. D., Maxwell, T.
By requiring each unit to satisfy a set of constraints on the interconnection weights, a particular structure is imposed on the network. A network built using such an architecture maintains its invariant performance independent of the values the weights assume, of the learning rules used, and of the form of the nonlinearities in the network. The invariance exhibited by a first-order network is usually of a trivial sort, e.g., responding only to the average input in the case of translation invariance, whereas higher-order networks can perform useful functions and still exhibit the invariance. We derive the weight constraints for translation, rotation, scale, and several combinations of these transformations, and report results of simulation studies.

INTRODUCTION A persistent difficulty for pattern recognition systems is the requirement that patterns or objects be recognized independent of irrelevant parameters or distortions such as orientation (position, rotation, aspect), scale or size, background or context, doppler shift, time of occurrence, or signal duration.
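The translation case can be sketched in a few lines (using circular shifts as the translation group; the construction follows the stated constraint idea, with details assumed): constraining the second-order weights W[i, j] to depend only on (i - j) mod N makes the unit's response identical for every shifted copy of the input, whatever values the free parameters take.

```python
import numpy as np

# Second-order unit with translation-constrained weights: one free
# parameter per offset k = (i - j) mod N, so W is circulant.
rng = np.random.default_rng(1)
N = 8
w = rng.normal(size=N)                       # free parameters, one per offset
W = np.array([[w[(i - j) % N] for j in range(N)] for i in range(N)])

def unit(x):
    return x @ W @ x                          # sum_{i,j} W[i,j] x[i] x[j]

x = rng.normal(size=N)
print([round(unit(np.roll(x, s)), 6) for s in range(N)])
# all N values coincide: the invariance holds for arbitrary w
```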