Haussler, D.


Learnability and the Vapnik-Chervonenkis dimension

Classics

We also give an example in a Boolean domain where this result gives significantly better bounds on the sample complexity than the simpler counting arguments used in [10] (Example 2.4). These two types of learning algorithms lead to two notions of feasible (but nonuniform) learnability for concept classes: polynomial learnability with respect to domain dimension, in which the sample size is allowed to grow polynomially in the Euclidean dimension of the domain, and polynomial learnability with respect to target complexity, in which the sample size is allowed to grow polynomially in the complexity of the target concept. In Section 3.2 we give a sufficient condition for polynomial learnability with respect to target complexity based on the principle of preferring the simpler hypothesis, usually called Occam's Razor (Theorem 3.2.1). We use this result to study learning in concept classes that are formed from simpler classes; for example, convex polygons are defined by finite intersections of half-planes.
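As a rough illustration of how sample-size bounds stated in terms of the Vapnik-Chervonenkis dimension are applied, the following Python sketch evaluates a commonly quoted sufficient sample size for a consistent learner. The constants follow the form in which bounds of this kind are usually cited rather than a verbatim transcription from the paper, and the function name is our own.

    import math

    def vc_sample_bound(d, epsilon, delta):
        # Commonly cited sufficient sample size for a consistent learner,
        # expressed in terms of the VC dimension d, accuracy epsilon and
        # confidence delta.  Constants vary between statements of such
        # bounds, so treat the exact numbers as illustrative.
        term_confidence = (4.0 / epsilon) * math.log2(2.0 / delta)
        term_dimension = (8.0 * d / epsilon) * math.log2(13.0 / epsilon)
        return math.ceil(max(term_confidence, term_dimension))

    # Example: VC dimension 10, 10% error, 95% confidence.
    print(vc_sample_bound(d=10, epsilon=0.1, delta=0.05))

The point of such bounds is that the required sample size grows only linearly (up to logarithmic factors) in the VC dimension, rather than with the logarithm of the possibly infinite number of concepts in the class.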


Learning conjunctive concepts in structural domains

Classics

We study the problem of learning conjunctive concepts from examples on structural domains like the blocks world. We show that it is NP-complete to determine whether any concept in this class is consistent with a given sample, demonstrate how this result affects the feasibility of Mitchell's version space approach, and show why it makes it unlikely that this class of concepts is polynomially learnable from random examples alone in the PAC framework of Valiant. On the other hand, we show that for any fixed bound on the number of objects per scene, this class is polynomially learnable if, in addition to providing random examples, we allow the learning algorithm to make subset queries. In establishing this result, we calculate the capacity of the hypothesis space of conjunctive concepts in a structural domain and use a general theorem of Vapnik and Chervonenkis.
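To make the structural-domain setting concrete, here is a minimal Python sketch, using data structures of our own choosing rather than the paper's formalism, of a blocks-world style scene and a conjunctive concept over it; matching is done by brute force over assignments of concept variables to scene objects.

    from itertools import permutations

    # A scene: objects with unary attributes plus binary relations between them.
    # A conjunctive concept: required attributes and relations over variables
    # 0, 1, ..., k-1.  Both encodings are illustrative assumptions.
    scene_attrs = {"a": {"block", "red"}, "b": {"block", "green"}, "c": {"table"}}
    scene_rels = {("a", "b"): {"on"}, ("b", "c"): {"on"}}

    concept_attrs = {0: {"block"}, 1: {"block"}}   # x0 and x1 must be blocks
    concept_rels = {(0, 1): {"on"}}                # x0 must be on x1

    def matches(scene_attrs, scene_rels, concept_attrs, concept_rels):
        # True if some assignment of distinct scene objects to the concept's
        # variables satisfies every required attribute and relation.
        objs = list(scene_attrs)
        k = len(concept_attrs)
        for assignment in permutations(objs, k):
            ok = all(concept_attrs[v] <= scene_attrs[assignment[v]]
                     for v in range(k))
            ok = ok and all(req <= scene_rels.get((assignment[u], assignment[v]), set())
                            for (u, v), req in concept_rels.items())
            if ok:
                return True
        return False

    print(matches(scene_attrs, scene_rels, concept_attrs, concept_rels))  # True

The brute-force search over assignments is exponential in the number of objects per scene, which is one way to see why a fixed bound on that number changes the complexity of the problem.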


Quantifying inductive bias: AI learning algorithms and Valiant’s learning framework

Classics

We show that the notion of inductive bias in concept learning can be quantified in a way that directly relates to learning performance in the framework recently introduced by Valiant. Using these bias measurements we analyze the performance of the classical learning algorithm for conjunctive concepts from the perspective of Valiant's learning framework. Improved learning algorithms are also developed for conjunctive concepts with internal disjunction, k-DNF and k-CNF concepts. We show that all our algorithms are within a logarithmic factor of optimal in terms of the number of examples they require to achieve a given level of learning performance in the Valiant framework.
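For reference, the classical learner for pure conjunctive concepts that this kind of analysis concerns is usually presented as: begin with the conjunction of all literals and delete every literal that a positive example falsifies. A minimal Python sketch of that textbook version over Boolean attribute vectors (the details follow the standard presentation, not a transcription from the paper):

    def learn_conjunction(positive_examples, n):
        # Textbook learner for pure conjunctive concepts over n Boolean
        # attributes: start with all 2n literals and drop every literal that
        # some positive example falsifies.  Literals are (index, value) pairs.
        literals = {(i, v) for i in range(n) for v in (True, False)}
        for x in positive_examples:          # x is a tuple of n booleans
            literals = {(i, v) for (i, v) in literals if x[i] == v}
        return literals

    def predict(literals, x):
        # A vector is labelled positive iff it satisfies every surviving literal.
        return all(x[i] == v for (i, v) in literals)

    # Hypothetical target: x0 AND NOT x2 over three attributes.
    pos = [(True, True, False), (True, False, False)]
    h = learn_conjunction(pos, n=3)
    print(sorted(h))                          # [(0, True), (2, False)]
    print(predict(h, (True, True, True)))     # False

In the Valiant framework, the question is how many random examples such an algorithm needs before its hypothesis is, with high probability, approximately correct.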