
A Novel Approach to Prediction of the 3-Dimensional Structures of Protein Backbones by Neural Networks

Neural Information Processing Systems

One current aim of molecular biology is the determination of the three-dimensional (3D) tertiary structures of proteins in their folded native state from their sequences of amino acid residues. Since Kendrew and Perutz solved the first protein structures, myoglobin and hemoglobin, and explained from the discovered structures how these proteins perform their function, it has been widely recognized that protein function is intimately linked with protein structure [1]. Within the last two decades X-ray crystallographers have solved the 3D structures of a steadily increasing number of proteins in the crystalline state, and recently 2D-NMR spectroscopy has emerged as an alternative method for small proteins in solution. Today approximately three hundred 3D structures have been solved by these methods, although only about half of them can be considered truly different, and only around a hundred of them are solved at high resolution (that is, better than 2 Å). The number of protein sequences known today is well over 20,000, and this number seems to be growing at least one order of magnitude faster than the number of known 3D protein structures. Obviously, it is of great importance to develop tools that can predict structural aspects of proteins on the basis of knowledge acquired from known 3D structures.


On the Circuit Complexity of Neural Networks

Neural Information Processing Systems

Viewing n-variable boolean functions as vectors in R^{2^n}, we invoke tools from linear algebra and linear programming to derive new results on the realizability of boolean functions using threshold gates. Using this approach, one can obtain: (1) upper bounds on the number of spurious memories in Hopfield networks, and on the number of functions implementable by a depth-d threshold circuit; (2) a lower bound on the number of orthogonal inputs.
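The central question here, which boolean functions a single threshold gate can realize, can be checked directly on small instances. The sketch below is not the paper's linear-programming machinery; it is a minimal brute-force search over integer weights (function name and weight bound are illustrative assumptions), enough to show that AND is threshold-realizable while XOR is not.

```python
from itertools import product

def threshold_realizable(f, n, wmax=2):
    """Brute-force search for integer weights w and a threshold t such that
    f(x) == (w . x >= t) for every boolean input x in {0,1}^n."""
    inputs = list(product([0, 1], repeat=n))
    for w in product(range(-wmax, wmax + 1), repeat=n):
        for t in range(-n * wmax, n * wmax + 1):
            if all(f(x) == (sum(wi * xi for wi, xi in zip(w, x)) >= t)
                   for x in inputs):
                return (w, t)
    return None

AND = lambda x: x[0] and x[1]
XOR = lambda x: x[0] != x[1]
print(threshold_realizable(AND, 2))  # ((1, 1), 2)
print(threshold_realizable(XOR, 2))  # None: XOR is not linearly separable
```

The `None` result for XOR is the classic reason depth-1 threshold circuits are weak, which is what motivates the depth-d counting results in the abstract.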



Connectionist Music Composition Based on Melodic and Stylistic Constraints

Neural Information Processing Systems

We describe a recurrent connectionist network, called CONCERT, that uses a set of melodies written in a given style to compose new melodies in that style. CONCERT is an extension of a traditional algorithmic composition technique in which transition tables specify the probability of the next note as a function of previous context. A central ingredient of CONCERT is the use of a psychologically-grounded representation of pitch.
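The transition-table baseline that CONCERT extends can be sketched in a few lines. This is the traditional first-order technique only, not CONCERT itself, and the note names and function names are illustrative assumptions:

```python
import random
from collections import defaultdict

def build_transition_table(melodies):
    """Count next-note frequencies conditioned on the previous note."""
    table = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for prev, nxt in zip(melody, melody[1:]):
            table[prev][nxt] += 1
    return table

def compose(table, start, length, rng=None):
    """Sample a new melody by walking the transition table,
    drawing each next note with probability proportional to its count."""
    rng = rng or random.Random(0)
    melody = [start]
    for _ in range(length - 1):
        choices = table[melody[-1]]
        if not choices:
            break  # no observed continuation from this note
        notes, counts = zip(*choices.items())
        melody.append(rng.choices(notes, weights=counts)[0])
    return melody

corpus = [["C", "D", "E", "C"], ["E", "D", "C", "D", "E"]]
table = build_transition_table(corpus)
print(compose(table, "C", 8))
```

CONCERT's contribution is to replace this fixed-order lookup with a recurrent network whose context is learned, and to represent pitch psychologically rather than as opaque symbols.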


Neural Networks Structured for Control Application to Aircraft Landing

Neural Information Processing Systems

A recurrent back-propagation neural network architecture was designed to numerically estimate the parameters of an optimal nonlinear control law for landing the aircraft. The performance of the network was then evaluated.


Development and Spatial Structure of Cortical Feature Maps: A Model Study

Neural Information Processing Systems

K. Schulten, Beckman Institute, University of Illinois, Urbana, IL 61801

Feature selective cells in the primary visual cortex of several species are organized in hierarchical topographic maps of stimulus features like "position in visual space", "orientation" and "ocular dominance". In order to understand and describe their spatial structure and their development, we investigate a self-organizing neural network model based on the feature map algorithm. The model explains map formation as a dimension-reducing mapping from a high-dimensional feature space onto a two-dimensional lattice, such that "similarity" between features (or feature combinations) is translated into "spatial proximity" between the corresponding feature selective cells. The model is able to reproduce several aspects of the spatial structure of cortical maps in the visual cortex.

1 Introduction

Cortical maps are functionally defined structures of the cortex, which are characterized by an ordered spatial distribution of functionally specialized cells along the cortical surface. In the primary visual area(s) the response properties of these cells must be described by several independent features, and there is a strong tendency to map combinations of these features onto the cortical surface in a way that translates "similarity" into "spatial proximity" of the corresponding feature selective cells (see e.g.
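The dimension-reducing mapping the abstract describes is the Kohonen feature map algorithm: each lattice site carries a weight vector in feature space, and the winner plus its lattice neighbours move toward each stimulus, so feature similarity becomes lattice proximity. A minimal sketch (function name, grid size, and annealing schedule are illustrative assumptions, not the paper's exact parameters):

```python
import numpy as np

def train_feature_map(features, grid=(8, 8), steps=2000, seed=0):
    """Minimal Kohonen self-organizing feature map: maps points from a
    high-dimensional feature space onto a 2-D lattice of weight vectors."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    w = rng.random((gx, gy, features.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(gx), np.arange(gy),
                                  indexing="ij"), axis=-1)
    for t in range(steps):
        x = features[rng.integers(len(features))]
        frac = t / steps
        lr = 0.5 * (1 - frac) + 0.01              # learning rate anneals
        sigma = max(grid) / 2 * (1 - frac) + 0.5  # neighbourhood shrinks
        # winner: lattice site whose weight vector is closest to x
        d = np.linalg.norm(w - x, axis=-1)
        win = np.unravel_index(np.argmin(d), d.shape)
        # Gaussian neighbourhood measured on the lattice, not in feature space
        ldist = np.linalg.norm(coords - np.array(win), axis=-1)
        h = np.exp(-(ldist ** 2) / (2 * sigma ** 2))
        w += lr * h[..., None] * (x - w)
    return w
```

After training, neighbouring lattice sites hold similar weight vectors, which is exactly the "similarity into spatial proximity" property; stripe- and blob-like patterns such as ocular dominance bands arise when the feature dimensions have unequal variance.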


A four neuron circuit accounts for change sensitive inhibition in salamander retina

Neural Information Processing Systems

In salamander retina, the response of On-Off ganglion cells to a central flash is reduced by movement in the receptive field surround. Through computer simulation of a 2-D model that takes into account their anatomical and physiological properties, we show that interactions between four neuron types (two bipolar and two amacrine) may be responsible for the generation and lateral conductance of this change-sensitive inhibition. The model shows that the four-neuron circuit can account for previously observed movement-sensitive reductions in ganglion cell sensitivity, and allows visualization and prediction of the spatiotemporal pattern of activity in change-sensitive retinal cells.


Generalization Properties of Radial Basis Functions

Neural Information Processing Systems

Atkeson, Brain and Cognitive Sciences Department and the Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139

We examine the ability of radial basis functions (RBFs) to generalize. We compare the performance of several types of RBFs. We use the inverse dynamics of an idealized two-joint arm as a test case. We find that without a proper choice of a norm for the inputs, RBFs have poor generalization properties. A simple global scaling of the input variables greatly improves performance.
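The effect of the input norm on RBF generalization can be seen on a toy problem. This sketch (not the paper's two-joint-arm setup; the target function, kernel-width heuristic, and scales are illustrative assumptions) fits a Gaussian RBF interpolant to inputs with wildly different ranges, with and without global per-input scaling:

```python
import numpy as np

def rbf_fit_predict(Xtr, ytr, Xte, reg=1e-6):
    """Gaussian RBF interpolation with one basis function per training
    point; the kernel width is tied to the mean pairwise input distance,
    so it inherits whatever norm the raw inputs impose."""
    d = np.linalg.norm(Xtr[:, None] - Xtr[None, :], axis=-1)
    sigma = d[d > 0].mean() / 2
    K = np.exp(-(d / sigma) ** 2)
    coef = np.linalg.solve(K + reg * np.eye(len(Xtr)), ytr)
    dte = np.linalg.norm(Xte[:, None] - Xtr[None, :], axis=-1)
    return np.exp(-(dte / sigma) ** 2) @ coef

rng = np.random.default_rng(0)
Xtr = rng.random((80, 2)) * [2 * np.pi, 1000.0]   # inputs on very different scales
Xte = rng.random((200, 2)) * [2 * np.pi, 1000.0]
f = lambda X: np.sin(X[:, 0]) + X[:, 1] / 1000.0  # toy target, not arm dynamics
scale = Xtr.std(axis=0)                           # simple global per-input scaling
err_raw = np.sqrt(np.mean((rbf_fit_predict(Xtr, f(Xtr), Xte) - f(Xte)) ** 2))
err_scaled = np.sqrt(np.mean(
    (rbf_fit_predict(Xtr / scale, f(Xtr), Xte / scale) - f(Xte)) ** 2))
print(err_raw, err_scaled)
```

Unscaled, the Euclidean distance is dominated by the large-range input, so the kernel cannot resolve variation along the other axis; dividing each input by its standard deviation restores a sensible norm and cuts the test error, which is the abstract's point.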


Interaction Among Ocularity, Retinotopy and On-center/Off-center Pathways During Development

Neural Information Processing Systems

The development of projections from the retinas to the cortex is mathematically analyzed according to the previously proposed thermodynamic formulation of the self-organization of neural networks. Three types of submodality included in the visual afferent pathways are assumed in two models: model (A), in which the ocularity and retinotopy are considered separately, and model (B), in which on-center/off-center pathways are considered in addition to ocularity and retinotopy. Model (A) shows striped ocular dominance spatial patterns and, in ocular dominance histograms, reveals a dip in the binocular bin. Model (B) displays spatially modulated irregular patterns and shows single-peak behavior in the histograms. When we compare the simulated results with the observed results, it is evident that the ocular dominance spatial patterns and histograms for models (A) and (B) agree very closely with those seen in monkeys and cats.


Relaxation Networks for Large Supervised Learning Problems

Neural Information Processing Systems

Feedback connections are required so that the teacher signal on the output neurons can modify weights during supervised learning. Relaxation methods are needed for learning static patterns with full-time feedback connections. Feedback network learning techniques have not achieved wide popularity because of the still greater computational efficiency of back-propagation. We show by simulation that relaxation networks of the kind we are implementing in VLSI are capable of learning large problems just like back-propagation networks. A microchip incorporates deterministic mean-field theory learning as well as stochastic Boltzmann learning. A multiple-chip electronic system implementing these networks will make high-speed parallel learning in them feasible in the future.
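The deterministic mean-field learning mentioned above can be sketched as contrastive Hebbian learning on a symmetric network: relax to a fixed point with the inputs clamped (free phase), relax again with the teacher also clamped on the outputs, and update weights from the difference of the two phases' correlations. A minimal sketch, assuming tanh units; all function names and the tiny 4-unit example are illustrative, not the chip's architecture:

```python
import numpy as np

def settle(W, b, v, clamp=None, iters=200, tol=1e-6):
    """Mean-field relaxation: iterate the deterministic mean-field
    equations v_i = tanh(sum_j W_ij v_j + b_i) to a fixed point.
    `clamp` maps unit indices to values held fixed during settling."""
    v = v.copy()
    for _ in range(iters):
        new = np.tanh(W @ v + b)
        if clamp:
            for i, val in clamp.items():
                new[i] = val
        if np.max(np.abs(new - v)) < tol:
            return new
        v = new
    return v

def chl_step(W, b, v0, inputs, targets, lr=0.1):
    """One contrastive Hebbian update: free phase (inputs clamped) versus
    teacher phase (inputs and outputs clamped); the weight change is the
    difference of the two phases' pairwise correlations, keeping W symmetric."""
    free = settle(W, b, v0, clamp=dict(inputs))
    teach = settle(W, b, v0, clamp={**dict(inputs), **dict(targets)})
    dW = lr * (np.outer(teach, teach) - np.outer(free, free))
    np.fill_diagonal(dW, 0.0)          # no self-connections
    return W + dW

n = 4
rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
b = np.zeros(n)
W = chl_step(W, b, np.zeros(n), inputs=[(0, 1.0), (1, -1.0)],
             targets=[(3, 1.0)])
```

The two settling phases are where relaxation networks pay their computational price relative to back-propagation; the appeal of the VLSI implementation is that the chip performs this settling in parallel analog hardware rather than in serial simulation.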