Two Iterative Algorithms for Computing the Singular Value Decomposition from Input/Output Samples
Terence D. Sanger, Jet Propulsion Laboratory, MS 303-310, 4800 Oak Grove Drive, Pasadena, CA 91109

Abstract The Singular Value Decomposition (SVD) is an important tool for linear algebra and can be used to invert or approximate matrices. Although many authors use "SVD" synonymously with "Eigenvector Decomposition" or "Principal Components Transform", it is important to realize that these other methods apply only to symmetric matrices, while the SVD can be applied to arbitrary nonsquare matrices. This property is important for applications to signal transmission and control. I propose two new algorithms for iterative computation of the SVD given only sample inputs and outputs from a matrix. Although there currently exist many algorithms for Eigenvector Decomposition (Sanger 1989, for example), these are the first true sample-based SVD algorithms.

1 INTRODUCTION

The Singular Value Decomposition (SVD) is a method for writing an arbitrary nonsquare matrix as the product of two orthogonal matrices and a diagonal matrix.
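As a point of reference for the factorization described above, the following minimal NumPy sketch checks the defining property A = U S V^T for a nonsquare matrix; it calls a standard batch library routine and is not the iterative, sample-based algorithm proposed in the paper.

    import numpy as np

    # Reference check of the SVD factorization A = U * diag(s) * Vt for a
    # nonsquare matrix, using a batch library routine (not the paper's
    # iterative, sample-based algorithms).
    A = np.random.randn(5, 3)                        # arbitrary nonsquare matrix
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    assert np.allclose(A, U @ np.diag(s) @ Vt)       # orthogonal * diagonal * orthogonal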
A Learning Analog Neural Network Chip with Continuous-Time Recurrent Dynamics
The recurrent network, containing six continuous-time analog neurons and 42 free parameters (connection strengths and thresholds), is trained to generate time-varying outputs approximating given periodic signals presented to the network. The chip implements a stochastic perturbative algorithm, which observes the error gradient along random directions in the parameter space for error-descent learning. In addition to the integrated learning functions and the generation of pseudo-random perturbations, the chip provides for teacher forcing and long-term storage of the volatile parameters. The network learns a 1 kHz circular trajectory in 100 sec. The chip occupies 2 mm x 2 mm in a 2 µm CMOS process, and dissipates 1.2 mW.

1 Introduction

Exact gradient-descent algorithms for supervised learning in dynamic recurrent networks [1-3] are fairly complex and do not provide for a scalable implementation in a standard 2-D VLSI process. We have implemented a fairly simple and scalable

*Present address: Johns Hopkins University, ECE Dept., Baltimore MD 21218-2686.
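The stochastic perturbative (weight-perturbation) idea mentioned above can be sketched in a few lines; the function below is an illustrative software analogue with assumed names and step sizes, not the chip's circuit-level implementation.

    import numpy as np

    def perturbative_step(theta, error_fn, sigma=1e-3, lr=0.1, rng=np.random):
        """One weight-perturbation update: observe the error change along a
        random direction in parameter space and descend that direction."""
        delta = rng.choice([-1.0, 1.0], size=theta.shape)   # random perturbation direction
        e0 = error_fn(theta)
        e1 = error_fn(theta + sigma * delta)
        grad_est = (e1 - e0) / sigma                         # directional derivative estimate
        return theta - lr * grad_est * delta                 # step against the estimate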
Encoding Labeled Graphs by Labeling RAAM
Alessandro Sperduti*, Department of Computer Science, Pisa University, Corso Italia 40, 56125 Pisa, Italy

Abstract In this paper we propose an extension to the RAAM by Pollack. This extension, the Labeling RAAM (LRAAM), can encode labeled graphs with cycles by representing pointers explicitly. Data encoded in an LRAAM can be accessed by pointer as well as by content. Direct access by content can be achieved by transforming the encoder network of the LRAAM into an analog Hopfield network with hidden units. Different access procedures can be defined depending on the access key.
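The recursive encoding scheme can be pictured with the shape-level sketch below, in which an LRAAM-style encoder compresses a node's label together with the codes (pointers) of its neighbors into a fixed-width code; the dimensions, random weights, and tanh nonlinearity are illustrative assumptions rather than the paper's exact architecture.

    import numpy as np

    label_dim, code_dim, k = 10, 20, 2               # illustrative sizes (k pointers per node)
    W_enc = 0.1 * np.random.randn(code_dim, label_dim + k * code_dim)
    W_dec = 0.1 * np.random.randn(label_dim + k * code_dim, code_dim)

    def encode(label, pointers):
        # Compress (label, pointer_1, ..., pointer_k) into a fixed-width code,
        # which can itself serve as a pointer to this node.
        x = np.concatenate([label] + pointers)
        return np.tanh(W_enc @ x)

    def decode(code):
        # Reconstruct the label and the k pointers from a code.
        y = np.tanh(W_dec @ code)
        return [y[:label_dim]] + [y[label_dim + i * code_dim : label_dim + (i + 1) * code_dim]
                                  for i in range(k)]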
Asynchronous Dynamics of Continuous Time Neural Networks
Wang, Xin, Li, Qingnan, Blum, Edward K.
Motivated by mathematical modeling, analog implementation and distributed simulation of neural networks, we present a definition of asynchronous dynamics of general continuous-time (CT) dynamical systems defined by ordinary differential equations, based on notions of local times and communication times. We provide some preliminary results on global asymptotic convergence of asynchronous dynamics for contractive and monotone CT dynamical systems. When applying the results to neural networks, we obtain some conditions that ensure additive-type neural networks to be asynchronizable.
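For concreteness, the sketch below takes one asynchronous Euler step of an additive-type network in which each unit integrates dx_i/dt = -x_i + sum_j w_ij tanh(x_j) + I_i using possibly stale copies of the other units' states; the update rule and constants are illustrative of the setting, not the paper's formal definition in terms of local times and communication times.

    import numpy as np

    def async_step(x, x_stale, W, I, dt=0.01):
        # One Euler step of the additive network dx/dt = -x + W*tanh(x_stale) + I,
        # where x_stale holds possibly out-of-date states communicated by other units.
        return x + dt * (-x + W @ np.tanh(x_stale) + I)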
Locally Adaptive Nearest Neighbor Algorithms
Wettschereck, Dietrich, Dietterich, Thomas G.
Four versions of a k-nearest neighbor algorithm with locally adaptive k are introduced and compared to the basic k-nearest neighbor algorithm (kNN). Locally adaptive kNN algorithms choose the value of k that should be used to classify a query by consulting the results of cross-validation computations in the local neighborhood of the query. Local kNN methods are shown to perform similarly to kNN in experiments with twelve commonly used data sets. Encouraging results in three constructed tasks show that local methods can significantly outperform kNN in specific applications. Local methods can be recommended for online learning and for applications where different regions of the input space are covered by patterns solving different sub-tasks.
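A minimal sketch of the locally adaptive idea, assuming k is picked by leave-one-out accuracy within the query's local neighborhood; the parameter names and this particular selection rule are illustrative and do not correspond to any one of the paper's four variants.

    import numpy as np
    from collections import Counter

    def local_knn_predict(X, y, query, M=30, k_candidates=(1, 3, 5, 7, 9)):
        """Choose k by leave-one-out accuracy among the M training points
        nearest to the query, then classify the query with that k."""
        d = np.linalg.norm(X - query, axis=1)
        local = np.argsort(d)[:M]                            # local neighborhood of the query
        best_k, best_acc = k_candidates[0], -1.0
        for k in k_candidates:
            hits = 0
            for i in local:                                  # leave-one-out inside the neighborhood
                d_i = np.linalg.norm(X - X[i], axis=1)
                nn = [j for j in np.argsort(d_i) if j != i][:k]
                pred = Counter(y[j] for j in nn).most_common(1)[0][0]
                hits += int(pred == y[i])
            acc = hits / len(local)
            if acc > best_acc:
                best_k, best_acc = k, acc
        nn = np.argsort(d)[:best_k]
        return Counter(y[j] for j in nn).most_common(1)[0][0]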
Learning Classification with Unlabeled Data
Department of Computer Science, University of Rochester, Rochester, NY 14627

Abstract One of the advantages of supervised learning is that the final error metric is available during training. For classifiers, the algorithm can directly reduce the number of misclassifications on the training set. Unfortunately, when modeling human learning or constructing classifiers for autonomous robots, supervisory labels are often not available or too expensive. In this paper we show that we can substitute for the labels by making use of structure between the pattern distributions across different sensory modalities. We show that minimizing the disagreement between the outputs of networks processing patterns from these different modalities is a sensible approximation to minimizing the number of misclassifications in each modality, and leads to similar results. Using the Peterson-Barney vowel dataset we show that the algorithm performs well in finding appropriate placement for the codebook vectors, particularly when the confusable classes are different for the two modalities.

1 INTRODUCTION

This paper addresses the question of how a human or autonomous robot can learn to classify new objects without experience with previous labeled examples.
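The disagreement objective described above can be written compactly; the squared-difference form below is an assumed stand-in for whatever disagreement measure the paper actually minimizes between the two modality networks.

    import numpy as np

    def disagreement_loss(p1, p2):
        # Mean squared disagreement between the class outputs of the two
        # modality networks on co-occurring pattern pairs (one row per pair).
        return np.mean(np.sum((p1 - p2) ** 2, axis=1))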
Feature Densities are Required for Computing Feature Correspondences
The feature correspondence problem is a classic hurdle in visual object recognition, concerned with determining the correct mapping between the features measured from the image and the features expected by the model. In this paper we show that determining good correspondences requires information about the joint probability density over the image features. We propose "likelihood based correspondence matching" as a general principle for selecting optimal correspondences. The approach is applicable to nonrigid models, allows nonlinear perspective transformations, and can optimally deal with occlusions and missing features.
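As a sketch of likelihood-based correspondence matching, the function below scores each one-to-one assignment of image features to model features by a Gaussian log-likelihood and keeps the best; the Gaussian density, the exhaustive search, and all names are illustrative assumptions standing in for the joint feature density discussed in the paper.

    import numpy as np
    from itertools import permutations

    def best_correspondence(image_feats, model_means, cov_inv):
        # Score each assignment of image features to model features by a
        # Gaussian log-likelihood (a stand-in for the joint feature density)
        # and return the highest-scoring correspondence.
        n = len(model_means)
        best, best_ll = None, -np.inf
        for perm in permutations(range(len(image_feats)), n):
            diffs = image_feats[list(perm)] - model_means
            ll = -0.5 * np.sum((diffs @ cov_inv) * diffs)
            if ll > best_ll:
                best, best_ll = perm, ll
        return best, best_ll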
Neural Network Methods for Optimization Problems
In a talk entitled "Trajectory Control of Convergent Networks with Applications to TSP", Natan Peterfreund (Computer Science, Technion) dealt with the problem of controlling the trajectories of continuous convergent neural network models for solving optimization problems, without affecting their equilibria set and their convergence properties. Natan presented a class of feedback control functions which achieve this objective, while also improving the convergence rates. A modified Hopfield and Tank neural network model, developed through the proposed feedback approach, was found to substantially improve the results of the original model in solving the Traveling Salesman Problem. The proposed feedback overcame the 2n-fold symmetry of the TSP problem. In a talk entitled "Training Feedforward Neural Networks Quickly and Accurately Using Very Fast Simulated Reannealing Methods", Bruce Rosen (Asst. Professor, Computer Science, UT San Antonio) presented the Very Fast Simulated Reannealing (VFSR) algorithm for training feedforward neural networks [2].
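For reference, VFSR's characteristic annealing schedule (as published in Ingber's description of the algorithm, not a detail taken from the talk itself) lowers each parameter's temperature as T_i(k) = T_i(0) exp(-c_i k^(1/D)) for a D-dimensional problem; a one-line sketch with assumed constants:

    import numpy as np

    def vfsr_temperature(T0, k, c, D):
        # Very Fast Simulated Reannealing schedule: T_i(k) = T_i(0) * exp(-c_i * k**(1/D)).
        return T0 * np.exp(-c * k ** (1.0 / D))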