Adversarially Robust Learning: A Generic Minimax Optimal Learner and Characterization
Neural Information Processing Systems
We present a minimax optimal learner for the problem of learning predictors robust to adversarial examples at test time. Interestingly, we find that this requires new algorithmic ideas and approaches to adversarially robust learning. In particular, we show, in a strong negative sense, the suboptimality of the robust learner proposed by [21], and of a broader family of learners we identify as local learners. Our results are enabled by adopting a global perspective, specifically through a key technical contribution: the global one-inclusion graph, which generalizes the classical one-inclusion graph due to [17] and may be of independent interest. Finally, as a byproduct, we identify a dimension characterizing, qualitatively and quantitatively, which classes of predictors H are robustly learnable. This resolves an open problem due to [21], and closes a (potentially) infinite gap between the established upper and lower bounds on the sample complexity of adversarially robust learning.
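The classical one-inclusion graph of [17], which the paper's global one-inclusion graph generalizes, can be illustrated with a small sketch. The threshold class, the greedy edge orientation, and all names below are illustrative assumptions, not the paper's construction: vertices are the distinct label vectors a class H induces on the sample plus the test point, edges join vectors differing in one coordinate, and the predictor follows the edge orientation, so mistakes are charged to edge tails and bounded by the maximum out-degree.

```python
# Toy sketch (assumed setup, not the paper's algorithm): the classical
# one-inclusion graph predictor on a hypothetical finite threshold class.

def behaviors(H, points):
    """Distinct label vectors that H induces on `points` (graph vertices)."""
    return {tuple(h(x) for x in points) for h in H}

def one_inclusion_edges(verts):
    """Edges join two behaviors that differ in exactly one coordinate."""
    vs = sorted(verts)
    edges = []
    for i, u in enumerate(vs):
        for v in vs[i + 1:]:
            diff = [k for k in range(len(u)) if u[k] != v[k]]
            if len(diff) == 1:
                edges.append((u, v))
    return edges

def orient(verts, edges):
    """Greedy orientation keeping out-degrees small (a heuristic stand-in;
    the classical result orients so max out-degree <= VC dimension)."""
    out = {v: 0 for v in verts}
    head_of = {}
    for u, v in edges:
        tail, head = (u, v) if out[u] <= out[v] else (v, u)
        head_of[frozenset((u, v))] = head
        out[tail] += 1
    return head_of

def predict(verts, head_of, known, i):
    """Predict coordinate i given the labels in `known` at all j != i.
    On an edge, predict the head's label: errors are charged to tails,
    so the error rate is controlled by the max out-degree."""
    cand = [v for v in verts
            if all(v[j] == known[j] for j in range(len(v)) if j != i)]
    if len(cand) == 1:
        return cand[0][i]
    return head_of[frozenset(cand)][i]

# Hypothetical class: thresholds h_t(x) = 1[x >= t] on the domain {0,1,2,3}.
H = [lambda x, t=t: int(x >= t) for t in range(5)]
domain = [0, 1, 2, 3]
verts = behaviors(H, domain)  # 5 monotone label vectors; the graph is a path
head_of = orient(verts, one_inclusion_edges(verts))
print(predict(verts, head_of, (1, 1, 1, None), 3))  # unique candidate -> 1
```

The paper's point is that such predictors built locally from the sample at hand are provably suboptimal for robust learning, motivating the global construction.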
Feb-10-2025, 20:35:20 GMT