Optimal Movement Primitives
Sanger, Terence D.
The theory of Optimal Unsupervised Motor Learning shows how a network can discover a reduced-order controller for an unknown nonlinear system by representing only the most significant modes. Here, I extend the theory to apply to command sequences, so that the most significant components discovered by the network correspond to motion "primitives". Combinations of these primitives can be used to produce a wide variety of different movements. I demonstrate applications to human handwriting decomposition and synthesis, as well as to the analysis of electrophysiological experiments on movements resulting from stimulation of the frog spinal cord.

1 INTRODUCTION

There is much debate within the neuroscience community concerning the internal representation of movement, and current neurophysiological investigations are aimed at uncovering these representations. In this paper, I propose a different approach that attempts to define the optimal internal representation in terms of "movement primitives", and I compare this representation with the observed behavior. In this way, we can make strong predictions about internal signal processing.
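As a rough illustration of the idea (not the paper's algorithm), primitives can be thought of as the leading singular vectors of a matrix of recorded command sequences; the sketch below uses synthetic data and NumPy's SVD to extract such primitives and synthesize a new movement from a weight combination. The data dimensions and weights are illustrative assumptions.

```python
# Illustrative sketch (not the paper's exact method): extract movement
# "primitives" as the leading singular vectors of a set of command
# sequences, then synthesize new movements as linear combinations.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 100 recorded movements, each a command sequence of
# 50 time samples (e.g., one pen coordinate over time).
movements = rng.standard_normal((100, 50))

# Center and take the SVD; rows of Vt are orthonormal temporal components.
mean = movements.mean(axis=0)
U, s, Vt = np.linalg.svd(movements - mean, full_matrices=False)

k = 3                          # keep only the k most significant modes
primitives = Vt[:k]            # (k, 50) movement primitives

# Any movement is approximated by its projection onto the primitives...
coeffs = (movements - mean) @ primitives.T        # (100, k) weights
reconstruction = mean + coeffs @ primitives       # reduced-order movements

# ...and novel movements can be synthesized from new weight combinations.
new_movement = mean + np.array([1.0, -0.5, 0.2]) @ primitives
```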
Two Iterative Algorithms for Computing the Singular Value Decomposition from Input/Output Samples
Sanger, Terence D.
The Singular Value Decomposition (SVD) is an important tool for linear algebra and can be used to invert or approximate matrices. Although many authors use "SVD" synonymously with "Eigenvector Decomposition" or "Principal Components Transform", it is important to realize that these other methods apply only to symmetric matrices, while the SVD can be applied to arbitrary nonsquare matrices. This property is important for applications to signal transmission and control. I propose two new algorithms for iterative computation of the SVD given only sample inputs and outputs from a matrix. Although there currently exist many algorithms for Eigenvector Decomposition (Sanger 1989, for example), these are the first true sample-based SVD algorithms.

1 INTRODUCTION

The Singular Value Decomposition (SVD) is a method for writing an arbitrary nonsquare matrix as the product of two orthogonal matrices and a diagonal matrix.
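The paper's two algorithms are not reproduced here; the hedged sketch below shows one simple way to recover the leading singular vectors of an unknown matrix from input/output samples alone, by averaging the sample cross-covariance (which converges to the matrix itself when the inputs are white) and running power iteration with deflation. The matrix A, sample count, and seeds are illustrative assumptions.

```python
# Illustrative sketch, not the paper's two algorithms: estimate the SVD of
# an unknown matrix A from (input, output) samples via the sample
# cross-covariance E[y x^T], then power iteration with deflation.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))          # unknown, nonsquare matrix

# Observe the matrix only through samples: x drawn white, y = A x.
# With E[x x^T] = I, the sample average of y x^T converges to A itself.
n = 10_000
X = rng.standard_normal((n, 6))
Y = X @ A.T
C = Y.T @ X / n                          # sample estimate of A

def power_svd(C, k, iters=200):
    """Leading k singular triplets of C via power iteration + deflation."""
    _, d = C.shape
    triplets = []
    for _ in range(k):
        v = np.random.default_rng(2).standard_normal(d)
        for _ in range(iters):
            u = C @ v
            u /= np.linalg.norm(u)
            v = C.T @ u
            sigma = np.linalg.norm(v)
            v /= sigma
        triplets.append((u, sigma, v))
        C = C - sigma * np.outer(u, v)   # deflate the found component
    return triplets

for u, sigma, v in power_svd(C, k=2):
    print(f"singular value ~ {sigma:.3f}")
```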
Optimal Unsupervised Motor Learning Predicts the Internal Representation of Barn Owl Head Movements
Sanger, Terence D.
This implies the existence of a set of orthogonal internal coordinates that are related to meaningful coordinates of the external world. No coherent computational theory has yet been proposed to explain this finding. I have proposed a simple model which provides a framework for a theory of low-level motor learning. I show that the theory predicts the observed microstimulation results in the barn owl. The model rests on the concept of "Optimal Unsupervised Motor Learning", which provides a set of criteria that predict optimal internal representations. I describe two iterative Neural Network algorithms which find the optimal solution and demonstrate possible mechanisms for the development of internal representations in animals.

1 INTRODUCTION

In the sensory domain, many algorithms for unsupervised learning have been proposed. These algorithms learn from the statistical properties of the input data, and often can be used to find useful "intermediate" sensory representations.
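One concrete instance of an iterative network rule that converges to orthogonal internal coordinates is the Generalized Hebbian Algorithm of Sanger (1989), sketched below on synthetic data; no claim is made that it matches either algorithm in this paper, and the input distribution is an assumption.

```python
# Minimal sketch of the Generalized Hebbian Algorithm (Sanger 1989), an
# iterative rule whose rows converge to an orthogonal set of principal
# components of the input distribution.
import numpy as np

rng = np.random.default_rng(3)

d_in, d_out, eta = 5, 2, 0.005
W = 0.1 * rng.standard_normal((d_out, d_in))

# Hypothetical stream of samples with anisotropic covariance.
scales = np.array([3.0, 2.0, 1.0, 0.5, 0.1])

for _ in range(20_000):
    x = scales * rng.standard_normal(d_in)
    y = W @ x
    # GHA update: Hebbian term minus a Gram-Schmidt-like decorrelation term.
    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

print(W @ W.T)   # approaches the identity: rows become orthonormal
```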
A Practice Strategy for Robot Learning Control
Sanger, Terence D.
"Trajectory Extension Learning" is a new technique for Learning Control in Robots which assumes that there exists some parameter of the desired trajectory that can be smoothly varied from a region of easy solvability of the dynamics to a region of desired behavior which may have more difficult dynamics. By gradually varying the parameter, practice movements remain near the desired path while a Neural Network learns to approximate the inverse dynamics. For example, the average speed of motion might be varied, and the inverse dynamics can be "bootstrapped" from slow movements with simpler dynamics to fast movements. This provides an example of the more general concept of a "Practice Strategy" in which a sequence of intermediate tasks is used to simplify learning a complex task. I show an example of the application of this idea to a real 2-joint direct drive robot arm.

1 INTRODUCTION

The most general definition of Adaptive Control is one which includes any controller whose behavior changes in response to the controlled system's behavior. In practice, this definition is usually restricted to modifying a small number of controller parameters in order to maintain system stability or global asymptotic stability of the errors during execution of a single trajectory (Sastry and Bodson 1989, for review). Learning Control represents a second level of operation, since it uses Adaptive Control to modify parameters during repeated performance trials of a desired trajectory so that future trials result in greater accuracy (Arimoto et al. 1984). In this paper I present a third level called a "Practice Strategy", in which Learning Control is applied to a sequence of intermediate trajectories leading ultimately to the true desired trajectory. I claim that this can significantly increase learning speed and make learning possible for systems which would otherwise become unstable.
Iterative Construction of Sparse Polynomial Approximations
Sanger, Terence D., Sutton, Richard S., Matheus, Christopher J.
We present an iterative algorithm for nonlinear regression based on construction of sparse polynomials. Polynomials are built sequentially from lower to higher order. Selection of new terms is accomplished using a novel look-ahead approach that predicts whether a variable contributes to the remaining error. The algorithm is based on the tree-growing heuristic in LMS Trees which we have extended to approximation of arbitrary polynomials of the input features. In addition, we provide a new theoretical justification for this heuristic approach.
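A hedged sketch of the greedy construction (the exact LMS-Tree look-ahead is not reproduced): candidate monomials are scored by correlation with the current residual, the best one is added, and all coefficients are jointly refit. The data, degree cap, and term count are illustrative assumptions.

```python
# Illustrative sketch of iterative sparse polynomial construction.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(4)
X = rng.standard_normal((500, 3))
y = 1.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1] * X[:, 2]   # sparse ground truth

def evaluate(term, X):
    """Monomial column for an index tuple, e.g. (1, 2) -> x1 * x2."""
    col = np.ones(X.shape[0])
    for j in term:
        col = col * X[:, j]
    return col

# Candidate terms: all monomials of degree 1..2 over the three features.
candidates = [t for d in (1, 2)
              for t in combinations_with_replacement(range(3), d)]

selected = []
columns = [np.ones(len(y))]          # intercept is always included
coef = np.array([y.mean()])
residual = y - coef[0]

for _ in range(2):                   # grow two non-constant terms
    # Look-ahead proxy: score unused terms by |correlation| with residual.
    scores = [-1.0 if t in selected
              else abs(np.corrcoef(evaluate(t, X), residual)[0, 1])
              for t in candidates]
    best = candidates[int(np.argmax(scores))]
    selected.append(best)
    columns.append(evaluate(best, X))
    # Refit all selected terms jointly, then recompute the residual.
    Phi = np.stack(columns, axis=1)
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    residual = y - Phi @ coef

print(selected)   # the two true terms, x0 and x1*x2, in some order
print(coef)       # coefficients near [1.0, 2.0, -3.0]
```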