Lg Depth Estimation and Ripple Fire Characterization Using Artificial Neural Networks
Perry, John L., Baumgardt, Douglas R.
This study has demonstrated how artificial neural networks (ANNs) can be used to characterize seismic sources using high-frequency regional seismic data. We have taken the novel approach of using ANNs as a research tool for obtaining seismic source information, specifically depth of focus for earthquakes and ripple-fire characteristics for economic blasts, rather than as just a feature classifier between earthquake and explosion populations. Overall, we have found that ANNs have potential applications to seismic event characterization and identification beyond feature classification. In future studies, these techniques should be applied to actual data of regional seismic events recorded at the new regional seismic arrays. The results of this study indicate that an ANN should be evaluated as part of an operational seismic event identification system. 1 INTRODUCTION ANNs have usually been used as pattern-matching algorithms, and recent studies have applied ANNs to standard classification between classes of earthquakes and explosions using waveform features (Dowla et al., 1989; Dysart and Pulli, 1990).
Learning by Combining Memorization and Gradient Descent
We have created a radial basis function network that allocates a new computational unit whenever an unusual pattern is presented to the network. The network learns by allocating new units and adjusting the parameters of existing units. If the network performs poorly on a presented pattern, then a new unit is allocated which memorizes the response to the presented pattern. If the network performs well on a presented pattern, then the network parameters are updated using standard LMS gradient descent. For predicting the Mackey-Glass chaotic time series, our network learns much faster than do those using back-propagation and uses a comparable number of synapses.
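A minimal Python sketch of the allocation rule described above may help; the class name, thresholds, widths, and learning rate are illustrative placeholders, not the values used in the paper:

    import numpy as np

    class ResourceAllocatingNet:
        # Sketch: memorize unusual patterns as new RBF units; otherwise
        # refine the existing units' output weights by LMS gradient descent.
        def __init__(self, err_thresh=0.05, dist_thresh=0.5, lr=0.02):
            self.centers, self.widths, self.weights = [], [], []
            self.err_thresh, self.dist_thresh, self.lr = err_thresh, dist_thresh, lr

        def _acts(self, x):
            return np.array([np.exp(-np.sum((x - c) ** 2) / w ** 2)
                             for c, w in zip(self.centers, self.widths)])

        def predict(self, x):
            return float(np.dot(self.weights, self._acts(x))) if self.centers else 0.0

        def train_step(self, x, y):
            err = y - self.predict(x)
            d = min((np.linalg.norm(x - c) for c in self.centers), default=np.inf)
            if abs(err) > self.err_thresh and d > self.dist_thresh:
                # Poor performance on a novel input: allocate a unit that
                # memorizes the response to this pattern.
                self.centers.append(np.asarray(x, dtype=float))
                self.widths.append(d if np.isfinite(d) else 1.0)
                self.weights.append(err)
            else:
                # Adequate performance: standard LMS update of output weights.
                for i, a in enumerate(self._acts(x)):
                    self.weights[i] += self.lr * err * a

Because the allocated unit has activation 1 at its own center and its weight equals the residual error, the network reproduces the presented response exactly at that input, which is the sense in which allocation "memorizes" the pattern.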
Spoken Letter Recognition
Through the use of neural network classifiers and careful feature selection, we have achieved high-accuracy speaker-independent spoken letter recognition. For isolated letters, a broad-category segmentation is performed. Location of segment boundaries allows us to measure features at specific locations in the signal, such as vowel onset, where important information resides. Letter classification is performed with a feed-forward neural network. Recognition accuracy on a test set of 30 speakers was 96%. Neural network classifiers are also used for pitch tracking and broad-category segmentation of letter strings.
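The pipeline the abstract describes (segment, measure features at the boundaries, classify with a feed-forward net) could be sketched roughly as follows; segmenter, feature_fn, and mlp are hypothetical placeholders, not the authors' code:

    import numpy as np

    def classify_letter(signal, segmenter, feature_fn, mlp, letters):
        boundaries = segmenter(signal)           # broad-category segmentation
        feats = feature_fn(signal, boundaries)   # features at e.g. vowel onset
        scores = mlp(feats)                      # feed-forward net outputs
        return letters[int(np.argmax(scores))]   # most likely spoken letter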
Designing Linear Threshold Based Neural Network Pattern Classifiers
Terrence L. Fine, School of Electrical Engineering, Cornell University, Ithaca, NY 14853. The three problems that concern us are identifying a natural domain of pattern classification applications of feedforward neural networks, selecting an appropriate feedforward network architecture, and assessing the tradeoff between network complexity, training set size, and statistical reliability as measured by the probability of incorrect classification. We close with some suggestions for improving the bounds that come from Vapnik-Chervonenkis theory, which can narrow, but not close, the chasm between theory and practice. Neural networks are appropriate as pattern classifiers when the pattern sources are ones of which we have little understanding beyond perhaps a nonparametric statistical model, but we have been provided with classified samples of features drawn from each of the pattern categories. Neural networks should be able to provide rapid and reliable computation of complex decision functions. The issue in doubt is their statistical response to new inputs.
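For context, a standard Vapnik-Chervonenkis bound of the kind the abstract alludes to (one common form, not necessarily the one refined in the paper) states that, for a classifier family of VC dimension d trained on n samples, with probability at least 1 - \delta,

    P_{err}(f) \le \hat{P}_{err}(f) + \sqrt{ \frac{d\left(\ln(2n/d) + 1\right) + \ln(4/\delta)}{n} }

uniformly over all f in the family; the looseness of the square-root term for practical n and d is the chasm between theory and practice referred to above.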
Language Induction by Phase Transition in Dynamical Recognizers
A higher order recurrent neural network architecture learns to recognize and generate languages after being "trained" on categorized exemplars. Studying these networks from the perspective of dynamical systems yields two interesting discoveries: First, a longitudinal examination of the learning process illustrates a new form of mechanical inference: Induction by phase transition. A small weight adjustment causes a "bifurcation" in the limit behavior of the network.
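A second-order ("higher order") recurrent update of the kind such dynamical recognizers use can be written compactly; the tensor shapes and the sigmoid choice below are illustrative assumptions:

    import numpy as np

    def step(W, z, x):
        # Next state is a squashed bilinear form in state z and input x;
        # W has shape (n_state, n_state, n_symbols), x is a one-hot symbol.
        return 1.0 / (1.0 + np.exp(-np.einsum('ijk,j,k->i', W, z, x)))

A string is accepted or rejected according to where the final state lands, and a small change in W can bifurcate the limit behavior of this map, which is the phase transition the abstract describes.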
A Method for the Efficient Design of Boltzmann Machines for Classification Problems
A Boltzmann machine ([AHS], [HS], [AK]) is a neural network model in which the units update their states according to a stochastic decision rule. It consists of a set U of units, a set C of unordered pairs of elements of U, and an assignment of connection strengths S: C → R. A configuration of a Boltzmann machine is a map k: U → {0, 1}.
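In the standard formulation, the stochastic decision rule is the Gibbs acceptance probability: unit u adopts state 1 with probability

    P\big(k(u) = 1\big) = \frac{1}{1 + e^{-\Delta_u / T}},
    \qquad \Delta_u = \sum_{\{u,v\} \in C} S(\{u,v\})\, k(v),

where \Delta_u is the change in consensus from activating u and T is the temperature of the annealing schedule.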
Connection Topology and Dynamics in Lateral Inhibition Networks
Marcus, C. M., Waugh, F. R., Westervelt, R. M.
We show analytically how the stability of two-dimensional lateral inhibition neural networks depends on the local connection topology. For various network topologies, we calculate the critical time delay for the onset of oscillation in continuous-time networks and present analytic phase diagrams characterizing the dynamics of discrete-time networks.
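As a point of reference, the discrete-time dynamics in question iterate a map of roughly the following form; the tanh nonlinearity, gain, and coupling matrix here are illustrative assumptions, not the paper's analysis:

    import numpy as np

    def iterate(J, s0, steps=200, gain=4.0):
        # s(t+1) = tanh(gain * J s(t)); with inhibitory (negative) couplings J
        # on a 2D grid, trajectories settle, oscillate, or wander depending on
        # the connection topology and gain.
        s, traj = s0.copy(), [s0.copy()]
        for _ in range(steps):
            s = np.tanh(gain * (J @ s))
            traj.append(s.copy())
        return np.array(traj)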
Discrete Affine Wavelet Transforms For Analysis And Synthesis Of Feedforward Neural Networks
Pati, Y. C., Krishnaprasad, P. S.
In this paper we show that discrete affine wavelet transforms can provide a tool for the analysis and synthesis of standard feedforward neural networks. It is shown that wavelet frames for L^2(R) can be constructed based upon sigmoids. The spatio-spectral localization property of wavelets can be exploited in defining the topology and determining the weights of a feedforward network. Training a network constructed using the synthesis procedure described here involves minimization of a convex cost functional and therefore avoids pitfalls inherent in standard backpropagation algorithms. Extension of these methods to L^2(R^N) is also discussed. 1 INTRODUCTION Feedforward-type neural network models constructed from empirical data have been found to display significant predictive power [6]. Mathematical justification in support of such predictive power may be drawn from various density and approximation theorems [1, 2, 5].
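To make the sigmoid-based construction concrete, here is one way a localized, zero-mean "mother wavelet" candidate can be assembled from shifted sigmoids; this is a sketch in the spirit of the construction, not the paper's exact frame element:

    import numpy as np

    def sigmoid(t):
        return 1.0 / (1.0 + np.exp(-t))

    def psi(x, a=1.0):
        # A difference of shifted sigmoids gives a localized bump of area 2a;
        # subtracting half-weighted shifted copies makes the result zero-mean.
        bump = lambda t: sigmoid(t + a) - sigmoid(t - a)
        return bump(x) - 0.5 * (bump(x - 2 * a) + bump(x + 2 * a))

Dilations and translations psi(2^j x - k) of such a function are then the candidates from which a frame for L^2(R) can be built, each implementable by a small group of sigmoidal units.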
Learning Theory and Experiments with Competitive Networks
Bilbro, Griff L., Bout, David E. van den
Van den Bout, North Carolina State University, Box 7914, Raleigh, NC 27695-7914. We apply the theory of Tishby, Levin, and Solla (TLS) to two problems. First we analyze an elementary problem, for which we find the predictions consistent with conventional statistical results. Second we numerically examine the more realistic problem of training a competitive net to learn a probability density from samples. We find TLS useful for predicting average training behavior. 1 TLS APPLIED TO LEARNING DENSITIES Recently a theory of learning has been constructed which describes the learning of a relation from examples (Tishby, Levin, and Solla, 1989; Schwartz, Samalam, Solla, and Denker, 1990). The original derivation relies on a statistical mechanics treatment of the probability of independent events in a system with a specified average value of an additive error function. The resulting theory is not restricted to learning relations, and it is not essentially statistical mechanical. The TLS theory can be derived from the principle of maximum entropy, a general inference tool which produces probabilities characterized by certain values of the averages of specified functions (Jaynes, 1979). A TLS theory can be constructed whenever the specified function is additive and associated with independent examples. In this paper we treat the problem of learning a probability density from samples.
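The maximum-entropy step is the standard one: constraining only the average of an additive error E(w) over independent examples and maximizing entropy yields the Gibbs distribution over network configurations w,

    P(w) = \frac{e^{-\beta E(w)}}{Z(\beta)},
    \qquad Z(\beta) = \int e^{-\beta E(w)}\, d\mu(w),

with \beta fixed by the specified average error; the TLS predictions of average training behavior follow from this distribution.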
Reconfigurable Neural Net Chip with 32K Connections
Graf, H. P., Janow, R., Henderson, D., Lee, R.
H. P. Graf, R. Janow, D. Henderson, and R. Lee, AT&T Bell Laboratories, Room 4G320, Holmdel, NJ 07733. We describe a CMOS neural net chip with a reconfigurable network architecture. It contains 32,768 binary, programmable connections arranged in 256 'building block' neurons. Several 'building blocks' can be connected to form long neurons with up to 1024 binary connections or to form neurons with analog connections. Single- or multi-layer networks can be implemented with this chip. We have integrated this chip into a board system together with a digital signal processor and fast memory.
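A rough software analogue of the 'building block' composition described here, with binary weights and inputs modeled as +/-1 vectors (an illustrative assumption, not the chip's circuit behavior):

    import numpy as np

    def long_neuron(weight_blocks, input_blocks, threshold=0):
        # Each block contributes a partial sum over its 256 binary
        # connections; chaining blocks yields one neuron with up to
        # 1024 connections, thresholded to a binary output.
        partial_sums = [int(np.dot(w, x))
                        for w, x in zip(weight_blocks, input_blocks)]
        return 1 if sum(partial_sums) > threshold else 0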