Learnability and the Vapnik-Chervonenkis dimension
Blumer, A. | Ehrenfeucht, A. | Haussler, D. | Warmuth, M.
Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and the necessary and sufficient conditions are provided for feasible learnability.
JACM, 36(4), 929-965
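To make the combinatorial parameter concrete, here is a small illustrative sketch (not taken from the paper; written in Python, with all names chosen for the example). The VC dimension of a concept class is the size of the largest point set it shatters; the sketch checks shattering for the class of closed intervals [a, b] on the real line, whose VC dimension is 2.

```python
from itertools import combinations

def shattered_by_intervals(points):
    """Return True if every subset of `points` is picked out by some closed interval."""
    points = sorted(points)
    for r in range(len(points) + 1):
        for subset in combinations(points, r):
            # An interval [a, b] can pick out `subset` exactly when the subset
            # is contiguous in the sorted order (the empty set is always realizable).
            if subset and any(p not in subset
                              for p in points
                              if subset[0] <= p <= subset[-1]):
                return False
    return True

print(shattered_by_intervals([1.0, 2.0]))       # True: a 2-point set is shattered
print(shattered_by_intervals([1.0, 2.0, 3.0]))  # False: the labeling (+, -, +) is unrealizable
```

Any three distinct points fail in the same way, so intervals have VC dimension 2 and, by the paper's criterion, form a distribution-free learnable class.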
Generalization of Back-Propagation to Recurrent and Higher Order Neural Networks
Fernando J. Pineda
Applied Physics Laboratory, Johns Hopkins University, Johns Hopkins Rd., Laurel, MD 20707
A general method for deriving backpropagation algorithms for networks with recurrent and higher order connections is introduced. The propagation of activation in these networks is determined by dissipative differential equations, and the error signal is backpropagated by integrating an associated differential equation. The method is introduced by applying it to the recurrent generalization of the feedforward backpropagation network, and is then extended to higher order networks and to a constrained dynamical system for training a content-addressable memory. The essential feature of the adaptive algorithms is that the adaptive equation has a simple outer product form.
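The outer-product adaptive equation mentioned in the abstract can be sketched concretely. The Python code below is an illustration rather than the paper's exact formulation: it assumes additive dynamics dx/dt = -x + tanh(Wx + I), relaxes the forward network and the associated linear error equation to fixed points by Euler integration, and then applies the outer-product weight update; the unit counts, learning rate, and variable names are all invented for the example.

```python
import numpy as np

def relax(f, x0, steps=2000, dt=0.05):
    """Euler-integrate dx/dt = f(x) until it settles near a fixed point."""
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * f(x)
    return x

rng = np.random.default_rng(0)
n, outputs = 5, [3, 4]               # 5 units; the last two are treated as outputs
W = 0.1 * rng.standard_normal((n, n))
I = rng.standard_normal(n)           # external input
targets = np.array([0.5, -0.5])      # desired fixed-point activations of the outputs
eta = 0.2

for _ in range(200):
    # Forward relaxation of the dissipative dynamics to a fixed point x*.
    x = relax(lambda x: -x + np.tanh(W @ x + I), np.zeros(n))
    u = W @ x + I
    gprime = 1.0 - np.tanh(u) ** 2   # derivative of tanh at the fixed point

    # Error is injected only at the output units.
    J = np.zeros(n)
    J[outputs] = targets - x[outputs]

    # Backward relaxation of the associated linear equation that carries the
    # error signal: dy/dt = -y + W^T (g'(u) * y) + J.
    y = relax(lambda y: -y + W.T @ (gprime * y) + J, np.zeros(n))

    # Adaptive step: a single outer product of the error-carrying fixed point
    # (scaled by g') with the forward fixed point.
    W += eta * np.outer(gprime * y, x)

print("outputs:", x[outputs], "targets:", targets)
```

The two relaxations mirror the abstract's description: activation propagates by integrating a dissipative differential equation, the error is backpropagated by integrating an associated differential equation, and the resulting weight change is a simple outer product.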