Probabilistic Anomaly Detection in Dynamic Systems
This paper describes probabilistic methods for novelty detection when pattern recognition methods are used for fault monitoring of dynamic systems. The problem of novelty detection is particularly acute when prior knowledge and training data only allow one to construct an incomplete classification model. Allowance must be made in the model design so that the classifier will be robust to data generated by classes not included in the training phase. For diagnosis applications, one practical approach is to construct both an input density model and a discriminative class model. Using Bayes' rule and prior estimates of the relative likelihood of data of known and unknown origin, the resulting classification equations are straightforward.
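As a rough illustration of the approach described above, the sketch below combines a fitted input density with a discriminative classifier via Bayes' rule; the flat density assumed for unseen classes, the prior value, and all function names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def classify_with_novelty(x, density_known, class_posteriors,
                          p_unknown=0.05, density_unknown=1e-3):
    """Combine an input density model with a discriminative class model.

    density_known(x)    -> p(x | known classes), e.g. a fitted Gaussian mixture
    class_posteriors(x) -> p(class | x, known), e.g. softmax network outputs
    p_unknown           -> assumed prior probability of data from unseen classes
    density_unknown     -> assumed flat density for data from unseen classes
    """
    p_known = 1.0 - p_unknown
    # Bayes' rule: posterior probability that x was generated by any known class
    evidence = density_known(x) * p_known + density_unknown * p_unknown
    post_known = density_known(x) * p_known / evidence
    # Scale the discriminative posteriors by the probability of being "known";
    # the remainder is the posterior probability of novelty.
    return post_known * np.asarray(class_posteriors(x)), 1.0 - post_known
```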
Processing of Visual and Auditory Space and Its Modification by Experience
Rauschecker, Josef P., Sejnowski, Terrence J.
Visual spatial information is projected from the retina to the brain in a highly topographic fashion, so that 2-D visual space is represented in a simple retinotopic map. Auditory spatial information, by contrast, has to be computed from binaural time and intensity differences as well as from monaural spectral cues produced by the head and ears. Evaluation of these cues in the central nervous system leads to the generation of neurons that are sensitive to the location of a sound source in space ("spatial tuning") and, in some animal species, to auditory space maps in which spatial location is encoded as a 2-D map, just as in the visual system. The brain structures thought to be involved in the multimodal integration of visual and auditory spatial information are the superior colliculus in the midbrain and the inferior parietal lobe in the cerebral cortex. It has been suggested for the owl that the visual system participates in setting up the auditory space map in the superior colliculus.
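For readers unfamiliar with the binaural time-difference cue mentioned above, the toy far-field model below relates interaural time difference to source azimuth; the head width and the simple d·sin(θ)/c geometry are textbook simplifications, not the neural computation discussed in the paper.

```python
import numpy as np

# Toy model: two ears separated by head width d, sound speed c.
# For a far-field source at azimuth theta, the interaural time
# difference (ITD) is roughly (d / c) * sin(theta).
def itd_from_azimuth(theta_deg, head_width_m=0.18, c_m_per_s=343.0):
    return head_width_m / c_m_per_s * np.sin(np.radians(theta_deg))

def azimuth_from_itd(itd_s, head_width_m=0.18, c_m_per_s=343.0):
    return np.degrees(np.arcsin(np.clip(itd_s * c_m_per_s / head_width_m, -1.0, 1.0)))

print(itd_from_azimuth(30.0))       # ~0.26 ms for a source 30 degrees off-center
print(azimuth_from_itd(0.000262))   # recovers roughly 30 degrees
```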
High Performance Neural Net Simulation on a Multiprocessor System with "Intelligent" Communication
Müller, Urs A., Kocheisen, Michael, Gunzinger, Anton
The performance requirements in experimental research on artificial neural nets often far exceed the capabilities of workstations and PCs. But speed is not the only requirement: flexibility and the implementation time for new algorithms are usually of equal importance. This paper describes the simulation of neural nets on the MUSIC parallel supercomputer, a system that strikes a good balance among these three issues and has therefore made possible many research projects that were unthinkable before. The system had to be flexible and simple to program, and its realization time had to be short enough that it would not be obsolete by the time it was finished. Therefore, the fastest available standard components were used.
Recognition-based Segmentation of On-Line Cursive Handwriting
This paper introduces a new recognition-based segmentation approach to recognizing online cursive handwriting from a database of 10,000 English words. The original input stream of x, y pen coordinates is encoded as a sequence of uniform stroke descriptions that are processed by six feed-forward neural networks, each designed to recognize letters of different sizes. Words are then recognized by performing best-first search over the space of all possible segmentations. Results demonstrate that the method is effective at both writer-dependent recognition (1.7% to 15.5% error rate) and writer-independent recognition (5.2% to 31.1% error rate). 1 Introduction With the advent of pen-based computers, the problem of automatically recognizing handwriting from the motions of a pen has gained much significance. Progress has been made in reading disjoint block letters [Weissman et.
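A minimal sketch of the best-first search over segmentations mentioned above, assuming a hypothetical letter_scores(segment) recognizer that returns log-probabilities for candidate letters; the interface and the search bookkeeping are illustrative, not the paper's six letter-recognition networks.

```python
import heapq

def recognize_word(strokes, letter_scores, max_strokes_per_letter=4):
    """Best-first search over all segmentations of a stroke sequence.

    letter_scores(segment) -> dict mapping letters to log-probabilities (<= 0),
    a stand-in for the letter-recognition networks. Because extending a
    hypothesis can only lower its log-probability, the first complete
    hypothesis popped from the heap is the best-scoring segmentation.
    """
    # Heap entries: (-log_prob, position in stroke sequence, letters so far)
    heap = [(0.0, 0, "")]
    while heap:
        neg_logp, pos, word = heapq.heappop(heap)
        if pos == len(strokes):
            return word, -neg_logp
        for n in range(1, min(max_strokes_per_letter, len(strokes) - pos) + 1):
            segment = strokes[pos:pos + n]
            for letter, logp in letter_scores(segment).items():
                heapq.heappush(heap, (neg_logp - logp, pos + n, word + letter))
    return None, float("-inf")
```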
Figure of Merit Training for Detection and Spotting
Chang, Eric I., Lippmann, Richard P.
Spotting tasks require detection of target patterns from a background of richly varied non-target inputs. The performance measure of interest for these tasks, called the figure of merit (FOM), is the detection rate for target patterns when the false alarm rate is in an acceptable range. A new approach to training spotters is presented which computes the FOM gradient for each input pattern and then directly maximizes the FOM using backpropagation. This eliminates the need for thresholds during training. It also uses network resources to model Bayesian a posteriori probability functions accurately only for patterns which have a significant effect on the detection accuracy over the false-alarm-rate range of interest.
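One way to obtain an FOM-style gradient for backpropagation, sketched below, is to smooth the detection count with a sigmoid at a threshold taken from the non-target score distribution; the surrogate, the temperature, and the use of PyTorch are assumptions for illustration, not the paper's exact gradient computation.

```python
import torch

def smooth_fom(target_scores, nontarget_scores, false_alarm_rate=0.05, temperature=0.1):
    """Differentiable surrogate for a figure of merit (FOM).

    target_scores, nontarget_scores: 1-D tensors of network outputs.
    The threshold is placed so that `false_alarm_rate` of non-target scores
    exceed it (treated as a constant here via detach); the detection rate at
    that threshold is approximated with a sigmoid so it can be maximized
    with ordinary backpropagation.
    """
    threshold = torch.quantile(nontarget_scores.detach(), 1.0 - false_alarm_rate)
    detections = torch.sigmoid((target_scores - threshold) / temperature)
    return detections.mean()
```

A training loop would then minimize the negative of this quantity and call backward() on it as with any other loss.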
Two Iterative Algorithms for Computing the Singular Value Decomposition from Input/Output Samples
Sanger, Terence D.
The Singular Value Decomposition (SVD) is an important tool for linear algebra and can be used to invert or approximate matrices. Although many authors use "SVD" synonymously with "Eigenvector Decomposition" or "Principal Components Transform", it is important to realize that these other methods apply only to symmetric matrices, while the SVD can be applied to arbitrary nonsquare matrices. This property is important for applications to signal transmission and control. I propose two new algorithms for iterative computation of the SVD given only sample inputs and outputs from a matrix. Although there currently exist many algorithms for Eigenvector Decomposition (Sanger 1989, for example), these are the first true sample-based SVD algorithms. 1 Introduction The Singular Value Decomposition (SVD) is a method for writing an arbitrary nonsquare matrix as the product of two orthogonal matrices and a diagonal matrix.
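As a point of reference for what "computing the SVD from input/output samples" means, the naive baseline below averages the outer products of outputs and inputs to estimate the matrix and then factors the estimate; this is an illustrative sketch under an identity-covariance input assumption, not either of the iterative algorithms proposed in the paper.

```python
import numpy as np

def svd_from_samples(sample_pairs, m, n):
    """Naive reference: estimate the matrix from (input, output) samples, then SVD it.

    Assumes y = A @ x with inputs drawn zero-mean with identity covariance,
    so the running mean of np.outer(y, x) converges to A.
    """
    estimate = np.zeros((m, n))
    for t, (x, y) in enumerate(sample_pairs, start=1):
        estimate += (np.outer(y, x) - estimate) / t   # running mean of y x^T
    return np.linalg.svd(estimate)

# Example: approximate the SVD of a hidden 3x5 matrix from 20000 random probes.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
pairs = ((x, A @ x) for x in (rng.standard_normal(5) for _ in range(20000)))
U, s, Vt = svd_from_samples(pairs, 3, 5)
print(np.round(s, 2), np.round(np.linalg.svd(A)[1], 2))   # estimated vs. true singular values
```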
A Learning Analog Neural Network Chip with Continuous-Time Recurrent Dynamics
The recurrent network, containing six continuous-time analog neurons and 42 free parameters (connection strengths and thresholds), is trained to generate time-varying outputs approximating given periodic signals presented to the network. The chip implements a stochastic perturbative algorithm, which observes the error gradient along random directions in the parameter space for error-descent learning. In addition to the integrated learning functions and the generation of pseudo-random perturbations, the chip provides for teacher forcing and long-term storage of the volatile parameters. The network learns a 1 kHz circular trajectory in 100 sec. The chip occupies 2 mm x 2 mm in a 2 μm CMOS process and dissipates 1.2 mW. 1 Introduction Exact gradient-descent algorithms for supervised learning in dynamic recurrent networks [1-3] are fairly complex and do not provide for a scalable implementation in a standard 2-D VLSI process. We have implemented a fairly simple and scalable
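A software caricature of the random-direction (perturbative) error-descent idea described above; the two-sided error measurement, the ±1 perturbation, and the step sizes are illustrative assumptions, not the chip's analog implementation.

```python
import numpy as np

def perturbative_step(params, error_fn, eps=1e-2, lr=1e-1, rng=None):
    """One error-descent step using a random perturbation direction.

    error_fn(params) -> scalar training error (on the chip this is the
    measured output error; here it is any Python callable). The gradient
    along a random +/-1 direction is estimated from two error readings,
    and the parameters move against that directional gradient.
    """
    rng = rng or np.random.default_rng()
    direction = rng.choice([-1.0, 1.0], size=params.shape)
    slope = (error_fn(params + eps * direction)
             - error_fn(params - eps * direction)) / (2.0 * eps)
    return params - lr * slope * direction
```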
Encoding Labeled Graphs by Labeling RAAM
Sperduti, Alessandro
In this paper we propose an extension to the RAAM by Pollack. This extension, the Labeling RAAM (LRAAM), can encode labeled graphs with cycles by representing pointers explicitly. Data encoded in an LRAAM can be accessed by pointer as well as by content. Direct access by content can be achieved by transforming the encoder network of the LRAAM into an analog Hopfield network with hidden units. Different access procedures can be defined depending on the access key.
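A toy sketch of an LRAAM-style encoder/decoder layout, assuming nodes that carry a label vector and up to two outgoing pointers; the layer sizes, the nil-pointer encoding, and the single reconstruction term shown are illustrative and omit the paper's full training procedure.

```python
import torch

# Toy LRAAM-style layout: each node is a label vector plus up to two pointers.
LABEL, PTR = 4, 8                          # label width and pointer (hidden) width
enc = torch.nn.Linear(LABEL + 2 * PTR, PTR)
dec = torch.nn.Linear(PTR, LABEL + 2 * PTR)

def encode(label, ptr_a, ptr_b):
    """Compress a node (label + child pointers) into its own pointer code."""
    return torch.tanh(enc(torch.cat([label, ptr_a, ptr_b])))

# One reconstruction term for a single node: the decoder must recover the
# label and the child pointers from the node's compressed pointer.
label = torch.randn(LABEL)
child_a, child_b = torch.zeros(PTR), torch.zeros(PTR)   # e.g. nil pointers
code = encode(label, child_a, child_b)
target = torch.cat([label, child_a, child_b])
loss = torch.nn.functional.mse_loss(dec(code), target)
loss.backward()                            # gradients for enc and dec
```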
Asynchronous Dynamics of Continuous Time Neural Networks
Wang, Xin, Li, Qingnan, Blum, Edward K.
Motivated by mathematical modeling, analog implementation, and distributed simulation of neural networks, we present a definition of asynchronous dynamics of general continuous-time (CT) dynamical systems defined by ordinary differential equations, based on notions of local times and communication times. We provide some preliminary results on global asymptotic convergence of asynchronous dynamics for contractive and monotone CT dynamical systems. Applying these results to neural networks, we obtain conditions that ensure that additive-type neural networks are asynchronizable.
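A crude simulation sketch of what asynchronous dynamics can mean for an additive continuous-time network: each Euler step updates a single randomly chosen unit using a possibly stale view of the others. The update schedule and the staleness model are illustrative stand-ins for the paper's local and communication times.

```python
import numpy as np

def asynchronous_additive_net(W, theta, steps=5000, dt=0.01, stale_max=5, rng=None):
    """Asynchronous Euler simulation of an additive network
       dx_i/dt = -x_i + sum_j W_ij * tanh(x_j) + theta_i,
    where each step a single unit integrates its equation using a view of
    the other units that may be up to `stale_max` steps old.
    """
    rng = rng or np.random.default_rng(0)
    n = len(theta)
    history = [np.zeros(n)]
    x = np.zeros(n)
    for _ in range(steps):
        i = rng.integers(n)                          # only one unit updates at a time
        delay = rng.integers(min(stale_max, len(history)))
        stale = history[-1 - delay]                  # possibly outdated view of the others
        x = x.copy()
        x[i] += dt * (-x[i] + W[i] @ np.tanh(stale) + theta[i])
        history.append(x)
    return x
```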