Using Pairs of Data-Points to Define Splits for Decision Trees
Hinton, Geoffrey E., Revow, Michael
Conventional binary classification trees such as CART either split the data using axis-aligned hyperplanes or they perform a computationally expensive search in the continuous space of hyperplanes with unrestricted orientations. We show that the limitations of the former can be overcome without resorting to the latter. For every pair of training data-points, there is one hyperplane that is orthogonal to the line joining the data-points and bisects this line. Such hyperplanes are plausible candidates for splits. In a comparison on a suite of 12 datasets we found that this method of generating candidate splits outperformed the standard methods, particularly when the training sets were small.
1 Introduction
Binary decision trees come in many flavours, but they all rely on splitting the set of k-dimensional data-points at each internal node into two disjoint sets.
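The candidate-generation step described above lends itself to a direct implementation. The following is a minimal sketch, not the authors' code: it enumerates pairs of training points, forms the hyperplane orthogonal to the line joining each pair and passing through its midpoint, and scores the induced split with Gini impurity (one plausible impurity measure; the helper names are illustrative).

# Hedged sketch of pair-defined candidate splits; helper names are illustrative.
import numpy as np
from itertools import combinations

def gini(y):
    """Gini impurity of a label array."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_pair_split(X, y):
    """For every pair of points, take the hyperplane orthogonal to the line
    joining them and bisecting it; keep the split with lowest weighted
    child impurity."""
    best = None
    for i, j in combinations(range(len(X)), 2):
        w = X[j] - X[i]                      # normal direction
        t = w @ (X[i] + X[j]) / 2.0          # threshold at the midpoint
        left = (X @ w) <= t
        if left.all() or (~left).all():      # degenerate split (e.g. duplicate points)
            continue
        score = (left.sum() * gini(y[left]) +
                 (~left).sum() * gini(y[~left])) / len(y)
        if best is None or score < best[0]:
            best = (score, w, t)
    return best  # (impurity, normal vector, threshold)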
Quadratic-Type Lyapunov Functions for Competitive Neural Networks with Different Time-Scales
The dynamics of complex neural networks modelling the self-organization process in cortical maps must include the aspects of long- and short-term memory. The behaviour of the network is thus characterized by an equation of neural activity as a fast phenomenon and an equation of synaptic modification as the slow part of the neural system. We present a quadratic-type Lyapunov function for the flow of a competitive neural system with fast and slow dynamic variables. We also show the consequences of the stability analysis for the neural net parameters.
1 INTRODUCTION
This paper investigates a special class of laterally inhibited neural networks. In particular, we have examined the dynamics of a restricted class of laterally inhibited neural networks from a rigorous analytic standpoint.
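Read as a two-time-scale (singularly perturbed) system, the fast/slow structure described above can be sketched in generic notation; this is an illustrative form, not the paper's exact equations:

\[
\epsilon\,\dot{x} = F(x, w) \quad\text{(fast: neural activity, short-term memory)}, \qquad
\dot{w} = G(x, w) \quad\text{(slow: synaptic modification, long-term memory)}, \qquad 0 < \epsilon \ll 1 .
\]

A quadratic-type Lyapunov function is then a function V(x, w), quadratic in the state, whose derivative along trajectories of the coupled flow is non-positive, yielding conditions on the network parameters.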
Family Discovery
"Family discovery" is the task of learning the dimension and structure of a parameterized family of stochastic models. It is especially appropriate when the training examples are partitioned into "episodes" of samples drawn from a single parameter value. We present three family discovery algorithms based on surface learning and show that they significantly improve performance over two alternatives on a parameterized classification task.
A Multiscale Attentional Framework for Relaxation Neural Networks
Tsioutsias, Dimitris I., Mjolsness, Eric
Many practical problems in computer vision, pattern recognition, robotics and other areas can be described in terms of constrained optimization. In the past decade, researchers have proposed means of solving such problems with the use of neural networks [Hopfield & Tank, 1985; Koch et al., 1986], which are thus derived as relaxation dynamics for the objective functions codifying the optimization task. One disturbing aspect of the approach soon became obvious, namely the apparent inability of the methods to scale up to practical problems, the principal reason being the rapid increase in the number of local minima present in the objectives as the dimension of the problem increases. Moreover, most objectives, E(v), are highly nonlinear, non-convex functions of v, and simple techniques (e.g.
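As a point of reference for the relaxation dynamics mentioned above, the following is a minimal sketch of the simplest such scheme: plain gradient descent on an objective E(v), i.e. integrating dv/dt = -grad E(v). It is the kind of simple technique whose scaling behaviour the paper criticizes; the function and parameter names are illustrative.

# Minimal sketch of relaxation dynamics on an objective E(v); names illustrative.
import numpy as np

def relax(grad_E, v0, step=0.01, iters=10000, tol=1e-6):
    """Forward-Euler integration of dv/dt = -grad E(v) until the update is
    small; returns a (possibly local) minimum of the objective."""
    v = np.asarray(v0, dtype=float)
    for _ in range(iters):
        v_new = v - step * grad_E(v)
        if np.linalg.norm(v_new - v) < tol:
            return v_new
        v = v_new
    return v

# Usage on a toy quadratic objective E(v) = ||A v - b||^2
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, -1.0])
v_star = relax(lambda v: 2 * A.T @ (A @ v - b), v0=np.zeros(2))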
Generalized Learning Vector Quantization
We propose a new learning method, "Generalized Learning Vector Quantization (GLVQ)," in which reference vectors are updated based on the steepest descent method in order to minimize the cost function. The cost function is determined so that the obtained learning rule satisfies the convergence condition. We prove that Kohonen's rule as used in LVQ does not satisfy the convergence condition and thus degrades recognition ability. Experimental results for printed Chinese character recognition reveal that GLVQ is superior to LVQ in recognition ability.
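For orientation, here is a hedged sketch of a GLVQ-style steepest-descent update, using the relative-distance cost mu = (d1 - d2)/(d1 + d2) and a sigmoid f commonly associated with GLVQ; the learning-rate handling and constants below are illustrative, not the paper's exact formulation.

# Hedged sketch of one online GLVQ-style update; assumes prototypes of both the
# correct and at least one other class exist.
import numpy as np

def glvq_step(x, label, protos, proto_labels, lr=0.05):
    """Pull the nearest same-class prototype toward x and push the nearest
    other-class prototype away, both scaled by df/dmu."""
    d = np.sum((protos - x) ** 2, axis=1)            # squared distances
    same = proto_labels == label
    i1 = np.where(same)[0][np.argmin(d[same])]       # nearest correct prototype
    i2 = np.where(~same)[0][np.argmin(d[~same])]     # nearest incorrect prototype
    d1, d2 = d[i1], d[i2]
    mu = (d1 - d2) / (d1 + d2)
    dfdmu = np.exp(-mu) / (1.0 + np.exp(-mu)) ** 2   # derivative of sigmoid f(mu)
    denom = (d1 + d2) ** 2
    protos[i1] += lr * dfdmu * (d2 / denom) * (x - protos[i1])
    protos[i2] -= lr * dfdmu * (d1 / denom) * (x - protos[i2])
    return mu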