A Network of Localized Linear Discriminants

Neural Information Processing Systems

The localized linear discriminant network (LLDN) has been designed to address classification problems containing relatively closely spaced data from different classes (encounter zones [1], the accuracy problem [2]). Locally trained hyperplane segments are an effective way to define the decision boundaries for these regions [3]. The LLD uses a modified perceptron training algorithm for effective discovery of separating hyperplane/sigmoid units within narrow boundaries. The basic unit of the network is the discriminant receptive field (DRF), which combines the LLD function with Gaussians representing the dispersion of the local training data with respect to the hyperplane. The DRF implements a local distance measure [4] and obtains the benefits of networks of localized units [5]. A constructive algorithm for the two-class case is described which incorporates DRFs into the hidden layer to solve local discrimination problems. The output unit produces a smoothed, piecewise linear decision boundary. Preliminary results indicate the ability of the LLDN to efficiently achieve separation when boundaries are narrow and complex, in cases where both the "standard" multilayer perceptron (MLP) and k-nearest neighbor (KNN) yield high error rates on training data.

1 The LLD Training Algorithm and DRF Generation

The LLD is defined by the hyperplane normal vector V and its "midpoint" M (a translated origin [1] near the center of gravity of the training data in feature space).
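A minimal sketch of how a single DRF could be evaluated, assuming a sigmoid over the hyperplane projection V·(x − M) multiplied by an isotropic Gaussian envelope with an assumed width sigma; the paper's exact dispersion term may differ:

```python
import numpy as np

# Illustrative DRF: LLD sigmoid over a locally trained hyperplane (normal V,
# midpoint M), localized by a Gaussian around M.  sigma is an assumed width.
def drf(x, V, M, sigma=1.0):
    x, V, M = map(np.asarray, (x, V, M))
    lld = 1.0 / (1.0 + np.exp(-(V @ (x - M))))                 # sigmoid of hyperplane projection
    local = np.exp(-np.sum((x - M) ** 2) / (2 * sigma ** 2))   # Gaussian local distance measure
    return lld * local

# Example: a DRF oriented along the first feature axis, centred at the origin.
print(drf(x=[0.5, 0.0], V=[2.0, 0.0], M=[0.0, 0.0]))
```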


Green's Function Method for Fast On-Line Learning Algorithm of Recurrent Neural Networks

Neural Information Processing Systems

The two well known learning algorithms for recurrent neural networks are back-propagation (Rumelhart et al., Werbos) and forward propagation (Williams and Zipser). The main drawback of back-propagation is its off-line backward path in time for error accumulation, which violates the online requirement in many practical applications. Although the forward propagation algorithm can be used in an online manner, its drawback is the heavy computation load required to update the high-dimensional sensitivity matrix (O(n^4) operations for each time step). Developing a fast forward algorithm is therefore a challenging task.
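For context, a small sketch of the Williams-Zipser forward-propagation (RTRL) sensitivity update whose per-step cost the abstract refers to; the network size n, tanh units, and variable names are illustrative:

```python
import numpy as np

# Illustrative RTRL sensitivity update for a fully recurrent tanh network.
n = 4                                       # number of recurrent units (assumed)
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((n, n))       # recurrent weights
h = np.zeros(n)                             # unit activations
p = np.zeros((n, n, n))                     # p[k, i, j] = d h_k / d W_ij  (O(n^3) storage)

def step(h, p, W):
    h_new = np.tanh(W @ h)
    dtanh = 1.0 - h_new ** 2
    p_new = np.zeros_like(p)
    # Every entry of p touches all n units: O(n^4) work per time step,
    # the "heavy computation load" mentioned in the abstract.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                p_new[k, i, j] = dtanh[k] * ((W[k] @ p[:, i, j]) + (h[j] if k == i else 0.0))
    return h_new, p_new

h, p = step(h, p, W)
```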


Learning Unambiguous Reduced Sequence Descriptions

Neural Information Processing Systems

Do you want your neural net algorithm to learn sequences? Do not limit yourself to conventional gradient descent (or approximations thereof). Instead, use your sequence learning algorithm (any will do) to implement the following method for history compression. No matter what your final goals are, train a network to predict its next input from the previous ones. Since only unpredictable inputs convey new information, ignore all predictable inputs but let all unexpected inputs (plus information about the time step at which they occurred) become inputs to a higher-level network of the same kind (working on a slower, self-adjusting time scale). Go on building a hierarchy of such networks.
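A toy sketch of the filtering step described above, with a trivial stand-in predictor in place of a trained network; only mispredicted inputs, paired with their time steps, are passed up to the next level:

```python
# Lower level sees the whole sequence; only inputs the predictor fails to
# anticipate (plus their time indices) are forwarded to the higher level.
# The predictor here is a stand-in rule, not the paper's trained network.
def compress(sequence, predict):
    reduced = []
    prev = None
    for t, x in enumerate(sequence):
        if predict(prev) != x:          # unexpected input carries new information
            reduced.append((t, x))      # keep it, together with its time step
        prev = x
    return reduced

seq = list("aaabaaac")
print(compress(seq, lambda prev: prev))   # -> [(0, 'a'), (3, 'b'), (4, 'a'), (7, 'c')]
```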


Learning How to Teach or Selecting Minimal Surface Data

Neural Information Processing Systems

Learning a map from an input set to an output set is similar to the problem of reconstructing hypersurfaces from sparse data (Poggio and Girosi, 1990). In this framework, we discuss the problem of automatically selecting "minimal" surface data. The objective is to be able to approximately reconstruct the surface from the selected sparse data. We show that this problem is equivalent to the one of compressing information by data removal and the one of learning how to teach. Our key step is to introduce a process that statistically selects the data according to the model.
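As an illustration only (not the authors' selection process), a greedy data-removal sketch in the spirit of the abstract: samples are dropped one at a time as long as an RBF-style reconstruction from the remaining points stays accurate; the kernel width and data budget are assumptions:

```python
import numpy as np

# Reconstruct a 1-D "surface" from a subset of samples with a Gaussian kernel.
def rbf_fit(Xs, ys, X, width=0.5):
    K = np.exp(-((X[:, None] - Xs[None, :]) ** 2) / (2 * width ** 2))
    Ks = np.exp(-((Xs[:, None] - Xs[None, :]) ** 2) / (2 * width ** 2))
    w = np.linalg.solve(Ks + 1e-6 * np.eye(len(Xs)), ys)
    return K @ w

X = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * X)
keep = list(range(len(X)))
while len(keep) > 8:                        # target "minimal" data budget (assumed)
    errs = []
    for i in keep:
        trial = [j for j in keep if j != i]
        recon = rbf_fit(X[trial], y[trial], X)
        errs.append((np.mean((recon - y) ** 2), i))
    keep.remove(min(errs)[1])               # drop the least informative sample
print("selected sample positions:", np.round(X[keep], 2))
```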


Improving the Performance of Radial Basis Function Networks by Learning Center Locations

Neural Information Processing Systems

Three methods for improving the performance of (Gaussian) radial basis function (RBF) networks were tested on the NETtalk task. In RBF, a new example is classified by computing its Euclidean distance to a set of centers chosen by unsupervised methods. The application of supervised learning to learn a non-Euclidean distance metric was found to reduce the error rate of RBF networks, while supervised learning of each center's variance resulted in inferior performance. The best improvement in accuracy was achieved by networks called generalized radial basis function (GRBF) networks. In GRBF, the center locations are determined by supervised learning. After training on 1000 words, RBF classifies 56.5% of letters correct, while GRBF scores 73.4% letters correct (on a separate test set). From these and other experiments, we conclude that supervised learning of center locations can be very important for radial basis function learning.
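A small sketch contrasting fixed-center RBF fitting with a GRBF-style supervised update of the center locations on a toy 1-D regression problem; the widths, learning rate, and task are assumptions, not the NETtalk setup:

```python
import numpy as np

# Toy GRBF-style training: output weights are solved by least squares, then the
# center locations are moved along the gradient of the squared error (the
# "supervised learning of center locations" credited with the largest gain).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])

def design(X, centers, width=1.0):
    d2 = (X - centers.T) ** 2                      # squared distance to each center
    return np.exp(-d2 / (2 * width ** 2))          # Gaussian basis activations

centers = rng.uniform(-3, 3, size=(6, 1))          # stand-in for unsupervised centers

for _ in range(500):
    Phi = design(X, centers)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # output weights
    err = Phi @ w - y
    # Gradient of 0.5*sum(err^2) with respect to each center location.
    grad = np.array([[np.sum(err * w[j] * Phi[:, j] * (X[:, 0] - centers[j, 0]))]
                     for j in range(len(centers))])
    centers -= 1e-3 * grad

Phi = design(X, centers)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("RMSE with learned centers:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```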


Allen Newell: A Remembrance

AI Magazine

I met Allen for the first time when I came for a two-semester visit to Carnegie Mellon University in 1968. This encounter was a distinct factor in my later decision to join the faculty at Carnegie Mellon University.


In Pursuit of Mind: The Research of Allen Newell

AI Magazine

Allen Newell was one of the founders and truly great scientists of AI. His contributions included foundational concepts and ground-breaking systems. His career was defined by the pursuit of a single, fundamental issue: the nature of the human mind. This article traces his pursuit from his early work on search and list processing in systems such as the LOGIC THEORIST and the GENERAL PROBLEM SOLVER; through his work on problem spaces, human problem solving, and production systems; through his final work on unified theories of cognition and SOAR.


AAAI News

AI Magazine



In Memoriam

AI Magazine

Allen Newell, one of the founders of AI and cognitive science, died on July 19th, 1992.