A Realizable Learning Task which Exhibits Overfitting
In this paper we examine a perceptron learning task. The task is realizable, since it is provided by another perceptron with identical architecture. Both perceptrons have nonlinear sigmoid output functions. The gain of the output function determines the degree of nonlinearity of the learning task. It is observed that a high level of nonlinearity leads to overfitting. We give an explanation for this rather surprising observation and develop a method to avoid the overfitting. This method has two possible interpretations: one is learning with noise, the other is cross-validated early stopping.
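To make the teacher-student setup concrete, here is a minimal Python sketch of a realizable task of this kind: a teacher perceptron with a sigmoid (tanh) output whose gain sets the nonlinearity, a student of identical architecture trained by gradient descent, input noise as one regularizer, and cross-validated early stopping as the other. All names (gain, teacher_w, student_w, noise_level) and numerical settings are illustrative assumptions, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x, gain):
        # Output nonlinearity; a larger gain makes the task more nonlinear.
        return np.tanh(gain * x)

    d, n_train, n_val = 20, 50, 200
    gain = 5.0                       # high gain: strongly nonlinear task
    teacher_w = rng.standard_normal(d) / np.sqrt(d)

    def make_data(n):
        X = rng.standard_normal((n, d))
        return X, sigmoid(X @ teacher_w, gain)

    X_tr, y_tr = make_data(n_train)
    X_va, y_va = make_data(n_val)

    student_w = rng.standard_normal(d) / np.sqrt(d)
    best_w, best_val = student_w.copy(), np.inf
    lr, noise_level = 0.05, 0.1

    for epoch in range(2000):
        # "Learning with noise": jitter the training inputs each epoch.
        X_noisy = X_tr + noise_level * rng.standard_normal(X_tr.shape)
        pred = sigmoid(X_noisy @ student_w, gain)
        err = pred - y_tr
        grad = (err * gain * (1 - pred ** 2)) @ X_noisy / n_train
        student_w -= lr * grad

        # Cross-validated early stopping: keep the weights with the lowest
        # validation error seen so far.
        val_err = np.mean((sigmoid(X_va @ student_w, gain) - y_va) ** 2)
        if val_err < best_val:
            best_val, best_w = val_err, student_w.copy()

    print("early-stopped validation error:", best_val)
    print("overlap with teacher:",
          best_w @ teacher_w / (np.linalg.norm(best_w) * np.linalg.norm(teacher_w)))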
Stock Selection via Nonlinear Multi-Factor Models
This paper discusses the use of multilayer feedforward neural networks for predicting a stock's excess return based on its exposure to various technical and fundamental factors. To demonstrate the effectiveness of the approach, a hedged portfolio consisting of equally capitalized long and short positions is constructed, and its historical returns are benchmarked against T-bill returns and the S&P500 index.
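As a rough illustration of the portfolio construction only (the function name hedged_portfolio_weights and the ranking rule below are assumptions, not the paper's procedure), an equally capitalized long/short portfolio can be built by going long the names with the highest predicted excess return and short those with the lowest, with equal dollar weight on each side:

    import numpy as np

    def hedged_portfolio_weights(predicted_excess_return, top_frac=0.1):
        """Equal-capital long/short weights from model scores (illustrative)."""
        scores = np.asarray(predicted_excess_return, dtype=float)
        n = len(scores)
        k = max(1, int(top_frac * n))
        order = np.argsort(scores)
        w = np.zeros(n)
        w[order[-k:]] = 1.0 / k       # long the k highest predicted returns
        w[order[:k]] = -1.0 / k       # short the k lowest predicted returns
        return w                      # each side carries equal absolute capital

    # Example: scores for 10 stocks from some factor model.
    scores = np.random.default_rng(1).standard_normal(10)
    w = hedged_portfolio_weights(scores, top_frac=0.2)
    print(w, w.sum())                 # net market exposure is approximately zero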
Examples of learning curves from a modified VC-formalism
Kowalczyk, Adam, Szymanski, Jacek, Bartlett, Peter L., Williamson, Robert C.
We examine the issue of evaluation of model-specific parameters in a modified VC-formalism. Two examples are analyzed: the 2-dimensional homogeneous perceptron and the 1-dimensional higher order neuron. Both models are solved theoretically, and their learning curves are compared against true learning curves. It is shown that the formalism has the potential to generate a variety of learning curves, including ones displaying "phase transitions."
Some results on convergent unlearning algorithm
Semenov, Serguei A., Shuvalova, Irina B.
In past years, unsupervised learning schemes have aroused strong interest among researchers, but for the time being little is known about the underlying learning mechanisms, and even fewer rigorous results such as convergence theorems have been obtained in this field. One promising concept along this line is the so-called "unlearning" for Hopfield-type neural networks (Hopfield et al., 1983; van Hemmen & Klemmer, 1992; Wimbauer et al., 1994). Elaborating on these elegant ideas, a convergent unlearning algorithm has recently been proposed (Plakhov & Semenov, 1994), which executes without pattern presentation. It aims to correct the initial Hebbian connectivity in order to provide extensive storage of arbitrarily correlated data. The algorithm is stated as follows. At each iteration step m, m = 0, 1, 2, ..., pick a random network state s(m)
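The excerpt above stops before the update rule, so the following Python sketch shows only the generic Hopfield-style unlearning step (relax a random state to a fixed point, then apply a small anti-Hebbian correction), not the specific convergent algorithm of Plakhov & Semenov; the names relax, unlearn_step, and the constant eps are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def relax(J, s, n_sweeps=20):
        # Asynchronous Hopfield dynamics until (approximately) a fixed point.
        n = len(s)
        for _ in range(n_sweeps):
            for i in rng.permutation(n):
                s[i] = 1 if J[i] @ s >= 0 else -1
        return s

    def unlearn_step(J, eps=0.01):
        # Pick a random network state s(m), let it relax, then weaken the
        # connections stabilizing the resulting (possibly spurious) attractor.
        n = J.shape[0]
        s = rng.choice([-1, 1], size=n)
        s = relax(J, s)
        J -= eps * np.outer(s, s)
        np.fill_diagonal(J, 0.0)
        return J

    # Start from Hebbian connectivity for a few correlated patterns.
    n, p = 50, 5
    patterns = rng.choice([-1, 1], size=(p, n))
    J = patterns.T @ patterns / n
    np.fill_diagonal(J, 0.0)
    for m in range(200):
        J = unlearn_step(J)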
Using Pairs of Data-Points to Define Splits for Decision Trees
Hinton, Geoffrey E., Revow, Michael
Conventional binary classification trees such as CART either split the data using axis-aligned hyperplanes or perform a computationally expensive search in the continuous space of hyperplanes with unrestricted orientations. We show that the limitations of the former can be overcome without resorting to the latter. For every pair of training data-points, there is one hyperplane that is orthogonal to the line joining the data-points and bisects this line. Such hyperplanes are plausible candidates for splits. In a comparison on a suite of 12 datasets, we found that this method of generating candidate splits outperformed the standard methods, particularly when the training sets were small.
1 Introduction
Binary decision trees come in many flavours, but they all rely on splitting the set of k-dimensional data-points at each internal node into two disjoint sets.
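A minimal Python sketch of the candidate-split generation described above, assuming the helper name candidate_splits (not from the paper): for each pair of points x_i, x_j the split hyperplane has normal x_j - x_i and passes through the midpoint (x_i + x_j)/2.

    import numpy as np
    from itertools import combinations

    def candidate_splits(X):
        """Yield (normal, offset) for the hyperplane orthogonal to the segment
        joining each pair of points and bisecting it (illustrative)."""
        for i, j in combinations(range(len(X)), 2):
            normal = X[j] - X[i]
            midpoint = (X[i] + X[j]) / 2.0
            # Points x with normal @ x > offset fall on one side of the split.
            yield normal, normal @ midpoint

    # Example: enumerate candidate splits for a tiny 2-D training set.
    X = np.array([[0.0, 0.0], [1.0, 0.5], [0.2, 1.3]])
    for normal, offset in candidate_splits(X):
        print(normal, offset)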
Quadratic-Type Lyapunov Functions for Competitive Neural Networks with Different Time-Scales
The dynamics of complex neural networks modelling the self-organization process in cortical maps must include aspects of both long- and short-term memory. The behaviour of the network is thus characterized by an equation of neural activity as a fast phenomenon and an equation of synaptic modification as a slow part of the neural system. We present a quadratic-type Lyapunov function for the flow of a competitive neural system with fast and slow dynamic variables. We also show the consequences of the stability analysis for the neural net parameters.
1 INTRODUCTION
This paper investigates a special class of laterally inhibited neural networks. In particular, we have examined the dynamics of a restricted class of laterally inhibited neural networks from a rigorous analytic standpoint.
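As a generic illustration only (this is the standard singularly perturbed form of a fast-slow system, not the paper's specific model), the coupled dynamics can be written with fast activity x and slow synaptic variables m as

    \dot{x} = F(x, m), \qquad \dot{m} = \varepsilon\, G(x, m), \qquad 0 < \varepsilon \ll 1,

and a quadratic-type Lyapunov function is then a function V(x, m), quadratic in the state, that is non-increasing along trajectories of this coupled fast-slow flow.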
Family Discovery
"Family discovery" is the task of learning the dimension and structure of a parameterized family of stochastic models. It is especially appropriate when the training examples are partitioned into "episodes" of samples drawn from a single parameter value. We present three family discovery algorithms based on surface learning and show that they significantly improve performance over two alternatives on a parameterized classification task.