On Stochastic Complexity and Admissible Models for Neural Network Classifiers
For a detailed rationale the reader is referred to the work of Rissanen (1984) or Wallace and Freeman (1987) and the references therein. Note that the Minimum Description Length (MDL) technique (as Rissanen's approach has become known) is implicitly related to Maximum A Posteriori (MAP) Bayesian estimation techniques if cast in the appropriate framework.
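As a one-line sketch of that connection: if code lengths are taken to be ideal Shannon lengths, L(·) = −log P(·), then minimizing a two-part description length is the same optimization as MAP estimation:

```latex
% Two-part MDL code length vs. MAP, with ideal code lengths L(.) = -log P(.)
\hat{\theta}_{\mathrm{MDL}}
  = \arg\min_{\theta}\bigl[\, -\log P(D \mid \theta) \;-\; \log P(\theta) \,\bigr]
  = \arg\max_{\theta}\, P(D \mid \theta)\, P(\theta)
  = \hat{\theta}_{\mathrm{MAP}}.
```

The first term is the code length of the data given the model, the second the code length of the model itself, which is why the prior plays the role of a model-complexity penalty.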
A Novel Approach to Prediction of the 3-Dimensional Structures of Protein Backbones by Neural Networks
Fredholm, Henrik, Bohr, Henrik, Bohr, Jakob, Brunak, Søren, Cotterill, Rodney M. J., Lautrup, Benny, Petersen, Steffen B.
Since Kendrew & Perutz solved the first protein structures, myoglobin and hemoglobin, and explained from the discovered structures how these proteins perform their function, it has been widely recognized that protein function is intimately linked with protein structure [1]. Within the last two decades X-ray crystallographers have solved the 3-dimensional (3D) structures of a steadily increasing number of proteins in the crystalline state, and recently 2D-NMR spectroscopy has emerged as an alternative method for small proteins in solution. Today approximately three hundred 3D structures have been solved by these methods, although only about half of them can be considered truly different, and only around a hundred of them have been solved at high resolution (that is, better than 2 Å). The number of protein sequences known today is well over 20,000, and this number seems to be growing at least one order of magnitude faster than the number of known 3D protein structures. It is therefore of great importance to develop tools that can predict structural aspects of proteins on the basis of knowledge acquired from known 3D structures.
VLSI Implementations of Learning and Memory Systems: A Review
A large number of VLSI implementations of neural network models have been reported, and the diversity of these implementations is noteworthy. This paper attempts to put a group of representative VLSI implementations in perspective by comparing and contrasting them. Design tradeoffs are discussed, and some suggestions for the direction of future implementation efforts are made.
Interaction Among Ocularity, Retinotopy and On-center/Off-center Pathways During Development
The development of projections from the retinas to the cortex is analyzed mathematically according to the previously proposed thermodynamic formulation of the self-organization of neural networks. Three types of submodality in the visual afferent pathways are assumed in two models: model (A), in which ocularity and retinotopy are considered separately, and model (B), in which on-center/off-center pathways are considered in addition to ocularity and retinotopy. Model (A) produces striped ocular dominance patterns whose histograms show a dip in the binocular bin; model (B) produces spatially modulated irregular patterns with single-peaked histograms. The simulated ocular dominance patterns and histograms for models (A) and (B) agree closely with those observed in monkeys and cats.
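The thermodynamic formulation itself is beyond the scope of an abstract, but a minimal correlation-based Hebbian sketch conveys the flavor of model (A): anticorrelated eyes plus a Mexican-hat lateral interaction produce alternating dominance bands. The kernel widths, correlation values, and learning rate below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                           # 1-D strip of cortical sites
x = np.arange(n)
diff = np.abs(x[:, None] - x[None, :])
d = np.minimum(diff, n - diff)                   # periodic distance on the strip
# Mexican hat: short-range excitation, longer-range inhibition
K = np.exp(-(d / 2.0) ** 2) - 0.5 * np.exp(-(d / 6.0) ** 2)

w = rng.uniform(0.45, 0.55, size=(n, 2))         # strengths to (left, right) eye
C = np.array([[1.0, -0.5],                       # within-eye vs. anticorrelated
              [-0.5, 1.0]])                      # between-eye activity correlations

for _ in range(200):
    w += 0.01 * (K @ w @ C)                      # correlation-based Hebbian growth
    w = np.clip(w, 0.0, 1.0)                     # synaptic saturation limits

dominance = w[:, 0] - w[:, 1]                    # >0: left-dominated, <0: right
print("".join("L" if v > 0 else "R" for v in dominance))  # alternating L/R bands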
Discrete Affine Wavelet Transforms for Analysis and Synthesis of Feedforward Neural Networks
Pati, Y. C., Krishnaprasad, P. S.
In this paper we show that discrete affine wavelet transforms provide a tool for the analysis and synthesis of standard feedforward neural networks. It is shown that wavelet frames for L²(ℝ) can be constructed based upon sigmoids. The spatio-spectral localization property of wavelets can be exploited in defining the topology and determining the weights of a feedforward network. Training a network constructed using the synthesis procedure described here involves minimization of a convex cost functional and therefore avoids the pitfalls inherent in standard backpropagation algorithms. Extension of these methods to L²(ℝ^N) is also discussed.
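A sketch of the flavor of such a synthesis is given below. The atom, lattice ranges, and target function are illustrative assumptions, not the paper's construction: a zero-mean "wavelet" is assembled from three shifted sigmoids, and once the hidden layer is fixed on a dilation/translation lattice, the output weights solve a convex least-squares problem.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def psi(x):
    # Second difference of sigmoids: integrates to zero, so it is a
    # plausible wavelet-like atom built entirely from sigmoids.
    return sigmoid(x + 1.0) - 2.0 * sigmoid(x) + sigmoid(x - 1.0)

# Dyadic dilations and uniform translations (assumed ranges, for illustration)
scales = [2.0 ** j for j in range(-2, 3)]
atoms = [(a, b) for a in scales for b in np.arange(-8.0, 8.0, 0.5)]

# Design matrix: each column is one dilated/translated atom on a sample grid
x = np.linspace(-4, 4, 400)
Phi = np.stack([psi((x - b) / a) for (a, b) in atoms], axis=1)

f = np.sin(3 * x) * np.exp(-x ** 2)      # target function to approximate

# Output weights by linear least squares: a convex problem, hence no
# backpropagation-style local minima once the hidden layer is fixed.
c, *_ = np.linalg.lstsq(Phi, f, rcond=None)
print("max approximation error:", np.max(np.abs(Phi @ c - f)))
```

Since each atom is itself a fixed linear combination of three shifted sigmoids, `Phi @ c` is exactly the output of a one-hidden-layer sigmoid network whose weights were obtained without gradient descent.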
Relaxation Networks for Large Supervised Learning Problems
Alspector, Joshua, Allen, Robert B., Jayakumar, Anthony, Zeppenfeld, Torsten, Meir, Ronny
Feedback connections are required so that the teacher signal on the output neurons can modify weights during supervised learning. Relaxation methods are needed for learning static patterns with full-time feedback connections. Feedback network learning techniques have not achieved wide popularity because of the still greater computational efficiency of back-propagation. We show by simulation that relaxation networks of the kind we are implementing in VLSI are capable of learning large problems just like back-propagation networks. A microchip incorporates deterministic mean-field theory learning as well as stochastic Boltzmann learning. A multiple-chip electronic system implementing these networks will make high-speed parallel learning in them feasible in the future.
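A toy software sketch of deterministic mean-field contrastive learning of the kind described follows. The network sizes, clamping scheme, and rates are illustrative assumptions, not the chip's algorithm: the network settles to a mean-field fixed point in a teacher-clamped phase and a free phase, and weights move toward the difference of the resulting correlations.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vis, n_hid = 4, 8
n = n_vis + n_hid
W = 0.1 * rng.normal(size=(n, n))
W = (W + W.T) / 2                  # symmetric weights, as relaxation requires
np.fill_diagonal(W, 0.0)

def settle(clamp_idx, clamp_val, steps=50, T=1.0):
    """Deterministic mean-field relaxation toward a fixed point m = tanh(W m / T)."""
    m = np.zeros(n)
    m[clamp_idx] = clamp_val
    free = np.setdiff1d(np.arange(n), clamp_idx)
    for _ in range(steps):
        m[free] = np.tanh(W[free] @ m / T)
    return m

vis = np.arange(n_vis)
pattern = rng.choice([-1.0, 1.0], size=n_vis)    # one +-1 training pattern

lr = 0.05
for _ in range(100):
    m_plus = settle(vis, pattern)                             # teacher-clamped phase
    m_minus = settle(vis[:n_vis // 2], pattern[:n_vis // 2])  # free phase, inputs only
    dW = lr * (np.outer(m_plus, m_plus) - np.outer(m_minus, m_minus))
    W += (dW + dW.T) / 2
    np.fill_diagonal(W, 0.0)

completed = settle(vis[:n_vis // 2], pattern[:n_vis // 2])[vis]
print("free-phase visibles:", np.round(completed, 2))
print("target pattern:     ", pattern)
```

The update uses only locally available correlations from the two phases, which is what makes this family of rules attractive for parallel analog VLSI.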
An Analog VLSI Splining Network
Schwartz, Daniel B., Samalam, Vijay K.
We have produced a VLSI circuit capable of learning to approximate arbitrary smooth functions of a single variable using a technique closely related to splines. The circuit effectively has 512 knots spaced on a uniform grid and has full support for learning. The circuit can also be used to approximate multi-variable functions as a sum of splines. An interesting and as yet nearly untapped set of applications for VLSI implementations of neural network learning systems can be found in adaptive control and nonlinear signal processing. In most such applications, the learning task consists of approximating a real function of a small number of continuous variables from discrete data points.
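A software sketch of the idea is given below; the linear-interpolation form, update rule, and target function are assumptions for illustration, since the circuit itself is analog. Each training sample updates only the two knots that bracket it, so learning is entirely local.

```python
import numpy as np

n_knots = 512                      # matches the knot count quoted above
lo, hi = 0.0, 1.0
knot_y = np.zeros(n_knots)         # learned knot values (the "spline")

def predict(x):
    """Linear interpolation between the two knots bracketing x."""
    t = (x - lo) / (hi - lo) * (n_knots - 1)
    i = int(np.clip(np.floor(t), 0, n_knots - 2))
    frac = t - i
    return (1 - frac) * knot_y[i] + frac * knot_y[i + 1], i, frac

def learn(x, target, lr=0.3):
    """LMS-style update touching only the two active knots (local learning)."""
    y, i, frac = predict(x)
    err = target - y
    knot_y[i] += lr * err * (1 - frac)
    knot_y[i + 1] += lr * err * frac

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)          # function to approximate
for _ in range(20000):
    x = rng.uniform(lo, hi)
    learn(x, f(x))

xs = np.linspace(lo, hi, 200)
print("max error:", max(abs(predict(x)[0] - f(x)) for x in xs))
```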
On the Circuit Complexity of Neural Networks
Roychowdhury, V. P., Siu, K. Y., Orlitsky, A., Kailath, T.
Viewing n-variable Boolean functions as vectors in ℝ^(2^n), we invoke tools from linear algebra and linear programming to derive new results on the realizability of Boolean functions using threshold gates. Using this approach, one can obtain: (1) upper bounds on the number of spurious memories in Hopfield networks, and on the number of functions implementable by a depth-d threshold circuit; (2) a lower bound on the number of orthogonal inputs.
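The linear-programming viewpoint can be made concrete in a few lines. The sketch below (the helper `threshold_realizable` is hypothetical, not from the paper) decides whether a Boolean function is computable by a single threshold gate by checking feasibility of the linear constraints y_i(w·x_i + b) ≥ 1 over all 2^n inputs.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def threshold_realizable(f, n):
    """Is f: {0,1}^n -> {0,1} computable by one threshold gate? LP feasibility test."""
    X = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    y = np.array([1.0 if f(x) else -1.0 for x in X])      # +-1 labels
    # Variables: (w_1..w_n, b). Require y_i (w.x_i + b) >= 1 for every input,
    # i.e. -y_i (w.x_i + b) <= -1: a pure feasibility LP with zero objective.
    A = -(y[:, None] * np.hstack([X, np.ones((len(X), 1))]))
    res = linprog(c=np.zeros(n + 1), A_ub=A, b_ub=-np.ones(len(X)),
                  bounds=[(None, None)] * (n + 1))
    return res.success

print(threshold_realizable(lambda x: x[0] and x[1], 2))   # AND: True
print(threshold_realizable(lambda x: x[0] != x[1], 2))    # XOR: False
```

Counting which of the 2^(2^n) such vectors admit a feasible LP is exactly the realizability question the abstract studies with linear-algebraic tools.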
A B-P ANN Commodity Trader
Collard, Joseph E.
An Artificial Neural Network (ANN) is trained to recognize a buy/sell (long/short) pattern for a particular commodity futures contract. The back-propagation of errors algorithm was used to encode the relationship between the desired long/short output and 18 fundamental variables plus 6 (or 18) technical variables. Trained on one year of past data, the ANN is able to predict long/short market positions for 9 months into the future that would have made a $10,301 profit on an investment of less than $1,000. The networks used were simple feed-forward networks with a single hidden layer of N units and one output unit, where N varied from six through sixteen.
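A minimal back-propagation sketch with roughly the abstract's dimensions appears below. The data here is synthetic and the labeling rule is invented for illustration; the paper's fundamental and technical market inputs are not provided.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the paper's data: 18 "fundamental" + 6 "technical" inputs
n_in, n_hid, n_samples = 24, 8, 250           # N = 8 hidden units (paper: 6-16)
X = rng.normal(size=(n_samples, n_in))
y = (X[:, :3].sum(axis=1) > 0).astype(float)  # 1 = long, 0 = short (synthetic rule)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1 = rng.normal(scale=0.3, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.3, size=(n_hid, 1)); b2 = np.zeros(1)

lr = 0.1
for epoch in range(2000):
    # forward pass through the single hidden layer
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)[:, 0]
    # backward pass (squared-error loss, as in early back-propagation work)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out[:, None] @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out[:, None] / n_samples; b2 -= lr * d_out.mean()
    W1 -= lr * X.T @ d_h / n_samples; b1 -= lr * d_h.mean(axis=0)

out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)[:, 0]
acc = ((out > 0.5) == (y > 0.5)).mean()
print(f"training accuracy on synthetic long/short labels: {acc:.2f}")
```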