Agnostic Membership Query Learning with Nontrivial Savings: New Results, Techniques

Karchmer, Ari

arXiv.org Machine Learning 

Agnostic learning [Hau92, KSS92] is an important generalization of PAC-learning [Val84]. It is meant to more accurately capture a common approach in machine learning, where a predefined set of functions is searched for the one achieving the least error on data produced by some entirely unknown process. Thus, roughly speaking, the objective of an agnostic learning algorithm for a complexity class Λ is to output a hypothesis h whose error in approximating an arbitrary concept is nearly as small as that of the best hypothesis within Λ; the class Λ is referred to as the touchstone class. Designing computationally efficient (i.e., polynomial-time) agnostic learning algorithms for expressive touchstone classes has historically proven hard. Even extremely simple touchstone classes, such as parity functions, are believed to be computationally hard to learn in the agnostic model [BFKL93]. Some positive results exist, however, including for piecewise functions [KSS92], restricted fan-in two-layer neural nets [Lee96], geometric patterns [GKS97], decision trees [GKK08], and halfspaces [KKMS08]. More positive results become known under some combination of the common relaxations considered in computational learning theory, such as access to membership queries, distribution-specific learning, or super-polynomial runtime. For instance, the famed polynomial-time agnostic learning algorithm for parity functions due to [GL89] (sometimes also referred to as the KM algorithm, after [KM91]) uses membership queries and requires the uniform distribution over unlabelled examples.
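The KM/Goldreich-Levin approach mentioned above hinges on a simple primitive: estimating a Fourier coefficient of the target function by making membership queries at uniformly random points. A minimal sketch of that estimation step follows (function and variable names are illustrative, not taken from the paper):

```python
import random

def estimate_fourier_coefficient(query, S, n, samples=10000, seed=0):
    """Estimate the Fourier coefficient f_hat(S) = E_x[f(x) * chi_S(x)]
    of a Boolean function f: {0,1}^n -> {-1,+1}.

    `query` is a membership-query oracle for f; x is drawn uniformly,
    which is why this style of algorithm needs the uniform distribution
    over unlabelled examples.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(samples):
        x = [rng.randint(0, 1) for _ in range(n)]
        chi = (-1) ** sum(x[i] for i in S)  # parity character chi_S(x)
        total += query(x) * chi             # membership query at x
    return total / samples

# Example: f is exactly the parity on S = {0, 2}, so f_hat(S) = 1.
n = 4
S = [0, 2]
f = lambda x: (-1) ** (x[0] ^ x[2])
est = estimate_fourier_coefficient(f, S, n)  # → 1.0 (f equals chi_S)
```

A Chernoff bound makes the sample count quantitative: O(log(1/δ)/ε²) queries suffice to estimate each coefficient to within ε with probability 1 − δ. The full KM algorithm wraps this estimator in a recursive search that isolates all large coefficients without enumerating every subset S.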
