Collaborating Authors

Sanger, Terence D.


Iterative Construction of Sparse Polynomial Approximations

Neural Information Processing Systems

We present an iterative algorithm for nonlinear regression based on construction of sparse polynomials. Polynomials are built sequentially from lower to higher order. Selection of new terms is accomplished using a novel look-ahead approach that predicts whether a variable contributes to the remaining error. The algorithm is based on the tree-growing heuristic in LMS Trees, which we have extended to approximation of arbitrary polynomials of the input features. In addition, we provide a new theoretical justification for this heuristic approach.
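
The sketch below (Python, illustrative only) captures the flavor of such a greedy construction: candidate terms are formed by multiplying current basis terms by single input variables, and the candidate that best predicts the current residual is added next. The function name and the correlation-based scoring rule are assumptions for illustration, not the paper's LMS-Trees-based procedure.

    import numpy as np

    # Greedy construction of a sparse polynomial, lower to higher order
    # (illustrative sketch, not the paper's algorithm). Each round scores
    # candidate terms (an existing basis term times one input variable)
    # by their correlation with the current residual.
    def greedy_sparse_poly(X, y, n_terms=3):
        n, d = X.shape
        basis = [np.ones(n)]            # start from the constant term
        residual = y.astype(float)
        coef = np.zeros(1)
        for _ in range(n_terms):
            candidates = [b * X[:, j] for b in basis for j in range(d)]
            scores = [abs(np.corrcoef(c, residual)[0, 1]) for c in candidates]
            basis.append(candidates[int(np.argmax(scores))])
            # Refit least squares on the enlarged basis; update the residual
            B = np.column_stack(basis)
            coef, *_ = np.linalg.lstsq(B, y, rcond=None)
            residual = y - B @ coef
        return basis, coef

    # Example: y = 2*x0 + 3*x0*x1; the greedy search should typically pick
    # the first-order term x0, then the second-order product x0*x1.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = 2 * X[:, 0] + 3 * X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=200)
    basis, coef = greedy_sparse_poly(X, y, n_terms=2)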


Basis-Function Trees as a Generalization of Local Variable Selection Methods for Function Approximation

Neural Information Processing Systems

Function approximation in high-dimensional spaces is often thwarted by a lack of sufficient data to adequately "fill" the space, or by a lack of sufficient computational resources. The technique of local variable selection provides a partial solution to these problems by attempting to approximate functions locally using fewer than the complete set of input dimensions.
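
As a minimal sketch of the idea (assuming a simple least-squares criterion; the helper name below is hypothetical, and this is not the paper's tree-growing procedure), one can select, within each local region, the single input dimension whose univariate fit leaves the smallest residual:

    import numpy as np

    # Local variable selection, minimal sketch: within a local region, keep
    # only the input dimension whose univariate linear fit leaves the
    # smallest squared residual. (Illustrative; not the paper's procedure.)
    def best_local_variable(X, y):
        errors = []
        for j in range(X.shape[1]):
            A = np.column_stack([X[:, j], np.ones(len(y))])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            errors.append(np.sum((y - A @ coef) ** 2))
        return int(np.argmin(errors))

    # Example: the relevant variable differs between two regions of the space
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 4))
    y = np.where(X[:, 1] > 0, 2 * X[:, 0], -3 * X[:, 2])
    left = X[:, 1] > 0
    print(best_local_variable(X[left], y[left]))    # 0: x0 matters here
    print(best_local_variable(X[~left], y[~left]))  # 2: x2 matters here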


An Optimality Principle for Unsupervised Learning

Neural Information Processing Systems

We propose an optimality principle for training an unsupervised feedforward neural network based upon maximal ability to reconstruct the input data from the network outputs. We describe an algorithm that can be used to train either linear or nonlinear networks with certain types of nonlinearity. Examples of applications to the problems of image coding, feature detection, and analysis of random-dot stereograms are presented.
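
In the linear case, the principle reduces to training a single-layer network so that the input can be reconstructed from the network outputs. The sketch below is an assumed simplification (a tied-weight linear network trained by plain gradient descent on the squared reconstruction error), not the paper's exact training rule, which also covers certain nonlinear networks; the data and learning rate are arbitrary choices.

    import numpy as np

    # Reconstruction principle, linear illustration: outputs are y = W x,
    # the reconstruction is W^T y, and gradient descent minimizes the
    # squared reconstruction error with tied weights.
    rng = np.random.default_rng(2)
    Z = rng.normal(size=(500, 3))              # latent causes
    A = rng.normal(size=(3, 8))                # mixed into 8 input dimensions
    X = Z @ A + 0.1 * rng.normal(size=(500, 8))

    W = 0.01 * rng.normal(size=(3, 8))         # 3 output units, 8 inputs
    lr = 0.005
    for step in range(500):
        Y = X @ W.T                            # network outputs
        R = X - Y @ W                          # reconstruction error
        # Gradient of 0.5 * ||R||^2 with respect to W, averaged over samples
        grad = -W @ (R.T @ X + X.T @ R) / len(X)
        W -= lr * grad
    print("reconstruction MSE:", np.mean(R ** 2))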

