Locally Adaptive Nearest Neighbor Algorithms
Wettschereck, Dietrich, Dietterich, Thomas G.
Four versions of a k-nearest neighbor algorithm with locally adaptive k are introduced and compared to the basic k-nearest neighbor algorithm (kNN). Locally adaptive kNN algorithms choose the value of k that should be used to classify a query by consulting the results of cross-validation computations in the local neighborhood of the query. Local kNN methods are shown to perform similarly to kNN in experiments with twelve commonly used data sets. Encouraging results on three constructed tasks show that local methods can significantly outperform kNN in specific applications. Local methods can be recommended for online learning and for applications where different regions of the input space are covered by patterns solving different sub-tasks.
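The per-query choice of k can be made concrete with a short sketch. The code below assumes NumPy arrays, plain Euclidean distance, a fixed candidate list of k values, and leave-one-out cross-validation over the m training points nearest the query; it is a minimal illustration of one plausible local-k scheme, not a reproduction of the paper's four variants, and all function names and parameters are illustrative.

```python
# Minimal sketch of a locally adaptive kNN classifier (illustrative only).
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k):
    """Plain kNN: majority vote among the k training points nearest to x."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

def local_knn_predict(X_train, y_train, x, k_candidates=(1, 3, 5, 7), m=20):
    """Choose k per query: score each candidate k by leave-one-out
    cross-validation on the m training points closest to the query,
    then classify the query with the best-scoring k."""
    dists = np.linalg.norm(X_train - x, axis=1)
    neighborhood = np.argsort(dists)[:m]
    best_k, best_acc = k_candidates[0], -1.0
    for k in k_candidates:
        correct = 0
        for i in neighborhood:
            mask = np.arange(len(X_train)) != i      # leave point i out
            pred = knn_predict(X_train[mask], y_train[mask], X_train[i], k)
            correct += int(pred == y_train[i])
        acc = correct / len(neighborhood)
        if acc > best_acc:
            best_k, best_acc = k, acc
    return knn_predict(X_train, y_train, x, best_k)
```

Because the cross-validation is restricted to the query's neighborhood, different queries can end up with different values of k, which is what lets such a method adapt across regions of the input space that belong to different sub-tasks.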
Memory-Based Methods for Regression and Classification
Dietterich, Thomas G., Wettschereck, Dietrich, Atkeson, Chris G., Moore, Andrew W.
Memory-based learning methods operate by storing all (or most) of the training data and deferring analysis of that data until "run time" (i.e., when a query is presented and a decision or prediction must be made). When a query is received, these methods generally answer it by retrieving and analyzing a small subset of the training data, namely the data in the immediate neighborhood of the query point. In short, memory-based methods are "lazy" (they wait until the query) and "local" (they use only a local neighborhood). The purpose of this workshop was to review the state of the art in memory-based methods and to understand their relationship to "eager" and "global" learning algorithms such as batch backpropagation. There are two essential components to any memory-based algorithm: the method for defining the "local neighborhood" and the learning method that is applied to the training examples in that local neighborhood.
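The two components named in the abstract can be illustrated with a short sketch. Locally weighted linear regression is used here as one representative memory-based regression method: a Gaussian kernel defines the local neighborhood, and weighted least squares is the learner applied inside it. The function name, the bandwidth parameter, and the kernel choice are assumptions for illustration, not a description of any specific method discussed at the workshop.

```python
# Hedged sketch: locally weighted linear regression as one example of a
# "lazy", "local" memory-based method. All training data is stored; a
# model is fit only when a query arrives.
import numpy as np

def locally_weighted_regression(X_train, y_train, x_query, bandwidth=1.0):
    """Predict y at x_query with a linear fit weighted toward nearby points."""
    # Component 1: define the local neighborhood via distance-based weights.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    w = np.exp(-(dists ** 2) / (2 * bandwidth ** 2))
    # Component 2: the local learner, weighted least squares with a bias term.
    Xb = np.hstack([X_train, np.ones((len(X_train), 1))])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(Xb * sw[:, None], y_train * sw, rcond=None)
    return np.append(x_query, 1.0) @ beta
```

An "eager", "global" method would instead fit a single model to all the data before any query is seen; the memory-based version pays its fitting cost at query time.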
Improving the Performance of Radial Basis Function Networks by Learning Center Locations
Wettschereck, Dietrich, Dietterich, Thomas
Three methods for improving the performance of (Gaussian) radial basis function (RBF) networks were tested on the NETtalk task. In RBF, a new example is classified by computing its Euclidean distance to a set of centers chosen by unsupervised methods. The application of supervised learning to learn a non-Euclidean distance metric was found to reduce the error rate of RBF networks, while supervised learning of each center's variance resulted in inferior performance. The best improvement in accuracy was achieved by networks called generalized radial basis function (GRBF) networks. In GRBF, the center locations are determined by supervised learning. After training on 1000 words, RBF classifies 56.5% of letters correctly, while GRBF scores 73.4% of letters correctly (on a separate test set). From these and other experiments, we conclude that supervised learning of center locations can be very important for radial basis function learning.
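The difference between RBF and GRBF described above can be sketched as follows, assuming a squared-error objective, a single shared width sigma, and a plain gradient step on the centers; these are simplifying assumptions for illustration, not the paper's exact training procedure.

```python
# Sketch of a Gaussian RBF network and the extra ingredient in GRBF:
# a supervised gradient step on the center locations.
import numpy as np

def rbf_activations(X, centers, sigma=1.0):
    """Gaussian activation of each center for each input row."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def rbf_predict(X, centers, W, sigma=1.0):
    """Outputs are a linear combination of the hidden-unit activations."""
    return rbf_activations(X, centers, sigma) @ W

def grbf_center_step(X, Y, centers, W, sigma=1.0, lr=0.01):
    """One gradient-descent step on the centers for squared error.
    In plain RBF the centers (e.g., from k-means) would stay fixed."""
    Phi = rbf_activations(X, centers, sigma)              # (n, m)
    err = Phi @ W - Y                                     # (n, k)
    # dE/dc_j = sum_i (err_i . W_j) * Phi_ij * (x_i - c_j) / sigma^2
    coef = (err @ W.T) * Phi / sigma ** 2                 # (n, m)
    grad = (coef[:, :, None] * (X[:, None, :] - centers[None, :, :])).sum(axis=0)
    return centers - lr * grad
```

Learning a non-Euclidean distance metric, as mentioned in the abstract, would amount to learning a weighting of the input dimensions inside the squared-distance computation; that variant is not shown here.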