Efficient Testable Learning of Halfspaces with Adversarial Label Noise

Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Sihan Liu, Nikos Zarifis

arXiv.org Artificial Intelligence 

Learning halfspaces from random labeled examples is a classical task in machine learning, with a history going back to the Perceptron algorithm [Ros58]. In the realizable PAC model [Val84] (i.e., with consistent labels), the class of halfspaces is known to be efficiently learnable without distributional assumptions. In contrast, in the agnostic (or adversarial label noise) model [Hau92, KSS94], even weak learning is computationally intractable in the distribution-free setting [Dan16, DKMR22, Tie22]. These intractability results have motivated the study of agnostic learning in the distribution-specific setting, i.e., when the marginal distribution on examples is assumed to be well-behaved. In this context, a number of algorithmic results are known.
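As a minimal illustration of the classical task referenced above (not the paper's own algorithm), the following sketch implements the standard Perceptron update rule for learning a halfspace sign(<w, x>) in the realizable (linearly separable) setting; all names and the toy data are illustrative:

```python
import numpy as np

def perceptron(X, y, max_epochs=100):
    """Classical Perceptron [Ros58]: on each misclassified example
    (y_i * <w, x_i> <= 0), update w <- w + y_i * x_i. Terminates with
    a consistent halfspace when the data are linearly separable."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(max_epochs):
        mistakes = 0
        for i in range(n):
            if y[i] * np.dot(w, X[i]) <= 0:
                w += y[i] * X[i]
                mistakes += 1
        if mistakes == 0:  # consistent with all labels
            break
    return w

# Toy usage: labels come from a ground-truth halfspace, so the
# realizable assumption holds and Perceptron fits all examples.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
w_star = rng.standard_normal(5)
y = np.sign(X @ w_star)
w_hat = perceptron(X, y)
print(np.mean(np.sign(X @ w_hat) == y))  # expected: 1.0
```

In the agnostic model discussed above, an adversary may corrupt an arbitrary fraction of labels, so no consistent halfspace need exist and this simple update loop gives no guarantee; that gap is exactly what motivates the distribution-specific algorithms the paper studies.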
