monotone conjunction
Adversarial Risk and Robustness: General Definitions and Implications for the Uniform Distribution
Dimitrios Diochnos, Saeed Mahloujifar, Mohammad Mahmoody
As the current literature contains multiple definitions of adversarial risk and robustness, we start by giving a taxonomy for these definitions based on their direct goals; we identify one of them as the one guaranteeing misclassification by pushing the instances to the error region. We then study some classic algorithms for learning monotone conjunctions and compare their adversarial robustness under different definitions by attacking the hypotheses using instances drawn from the uniform distribution. We observe that sometimes these definitions lead to significantly different bounds. Thus, this study advocates for the use of the error-region definition, even though other definitions, in other contexts with context-dependent assumptions, may coincide with the error-region definition.
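The error-region definition above can be made concrete with a brute-force sketch: an instance x counts against the hypothesis if the adversary can move it, within its budget, to some point where the hypothesis disagrees with the target. All names below are illustrative, and the exhaustive enumeration is only feasible for tiny n:

```python
from itertools import product

def conjunction(idxs):
    # Monotone conjunction over the variable indices in idxs.
    return lambda x: int(all(x[i] for i in idxs))

def hamming_ball(x, budget):
    # All points within Hamming distance `budget` of x (brute force).
    for y in product((0, 1), repeat=len(x)):
        if sum(a != b for a, b in zip(x, y)) <= budget:
            yield y

def error_region_risk(c, h, n, budget):
    # Probability, over uniform x in {0,1}^n, that the adversary can push x
    # to some y in the error region, i.e. a point y with h(y) != c(y).
    points = list(product((0, 1), repeat=n))
    hits = sum(any(h(y) != c(y) for y in hamming_ball(x, budget))
               for x in points)
    return hits / len(points)

c = conjunction([0, 1])       # target: x0 AND x1
h = conjunction([0, 1, 2])    # hypothesis with one spurious literal
print(error_region_risk(c, h, n=4, budget=0))  # 0.125 (budget 0 = plain risk)
print(error_region_risk(c, h, n=4, budget=1))  # 0.5
```

With budget 0 the definition degenerates to standard risk; a single perturbed bit already quadruples the risk in this toy example, which is the kind of gap between definitions the abstract refers to.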
Sample Complexity of Robust Learning against Evasion Attacks
It is becoming increasingly important to understand the vulnerability of machine learning models to adversarial attacks. One of the fundamental problems in adversarial machine learning is to quantify how much training data is needed in the presence of evasion attacks, where data is corrupted at test time. In this thesis, we work with the exact-in-the-ball notion of robustness and study the feasibility of adversarially robust learning from the perspective of learning theory, considering sample complexity. We first explore the setting where the learner has access to random examples only, and show that distributional assumptions are essential. We then focus on learning problems with distributions on the input data that satisfy a Lipschitz condition and show that robustly learning monotone conjunctions has sample complexity at least exponential in the adversary's budget (the maximum number of bits it can perturb on each input). However, if the adversary is restricted to perturbing $O(\log n)$ bits, then one can robustly learn conjunctions and decision lists w.r.t. log-Lipschitz distributions. We then study learning models where the learner is given more power. We first consider local membership queries, where the learner can query the label of points near the training sample. We show that, under the uniform distribution, the exponential dependence on the adversary's budget to robustly learn conjunctions remains inevitable. We then introduce a local equivalence query oracle, which returns whether the hypothesis and target concept agree in a given region around a point in the training sample, and a counterexample if it exists. We show that if the query radius is equal to the adversary's budget, we can develop robust empirical risk minimization algorithms in the distribution-free setting. We give general query complexity upper and lower bounds, as well as bounds for concrete concept classes.
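The local equivalence query oracle described above admits a direct brute-force sketch: given a point from the training sample and a radius, report whether hypothesis and target agree everywhere on the surrounding Hamming ball, and return a counterexample otherwise. Names are hypothetical and the enumeration only works for small n:

```python
from itertools import product

def hamming_ball(x, radius):
    # All points within Hamming distance `radius` of x (brute force).
    for y in product((0, 1), repeat=len(x)):
        if sum(a != b for a, b in zip(x, y)) <= radius:
            yield y

def local_eq_query(h, c, x, radius):
    # Do hypothesis h and target c agree everywhere on the ball around x?
    # Returns (True, None) on agreement, else (False, counterexample).
    for y in hamming_ball(x, radius):
        if h(y) != c(y):
            return (False, y)
    return (True, None)

target = lambda x: int(x[0] and x[1])            # monotone conjunction x0 AND x1
hyp    = lambda x: int(x[0] and x[1] and x[2])   # spurious extra literal

agree, witness = local_eq_query(hyp, target, (1, 1, 1, 0), radius=1)
print(agree, witness)   # disagreement found: witness lies in the error region
```

When the query radius matches the adversary's budget, a returned counterexample is exactly a point the adversary could exploit, which is what makes robust empirical risk minimization possible in this model.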
Sample Complexity Bounds for Robustly Learning Decision Lists against Evasion Attacks
Gourdeau, Pascale, Kanade, Varun, Kwiatkowska, Marta, Worrell, James
A fundamental problem in adversarial machine learning is to quantify how much training data is needed in the presence of evasion attacks. In this paper we address this issue within the framework of PAC learning, focusing on the class of decision lists. Given that distributional assumptions are essential in the adversarial setting, we work with probability distributions on the input data that satisfy a Lipschitz condition: nearby points have similar probability. Our key results illustrate that the adversary's budget (that is, the number of bits it can perturb on each input) is a fundamental quantity in determining the sample complexity of robust learning. Our first main result is a sample-complexity lower bound: the class of monotone conjunctions (essentially the simplest non-trivial hypothesis class on the Boolean hypercube) and any superclass have sample complexity at least exponential in the adversary's budget. Our second main result is a corresponding upper bound: for every fixed $k$, the class of $k$-decision lists has polynomial sample complexity against a $\log(n)$-bounded adversary. This sheds further light on the question of whether an efficient PAC learning algorithm can always be used as an efficient $\log(n)$-robust learning algorithm under the uniform distribution.
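For concreteness, a k-decision list is an ordered sequence of rules, each guarded by a conjunction of at most k literals, with a default output when no guard fires. A minimal evaluation sketch (names and the example list are illustrative):

```python
def eval_decision_list(rules, default, x):
    # rules: list of (guard, output) pairs; a guard is a list of
    # (index, required_value) literals, at most k of them per rule.
    # The first rule whose guard is satisfied determines the output.
    for guard, output in rules:
        if all(x[i] == v for i, v in guard):
            return output
    return default

# A 2-decision list on {0,1}^4:
#   if x0 = 1 and x2 = 0 then 1; else if x1 = 0 then 0; else 1.
rules = [([(0, 1), (2, 0)], 1),
         ([(1, 0)], 0)]
print(eval_decision_list(rules, 1, (1, 0, 0, 0)))  # 1 (first rule fires)
print(eval_decision_list(rules, 1, (0, 0, 1, 0)))  # 0 (second rule fires)
print(eval_decision_list(rules, 1, (0, 1, 1, 0)))  # 1 (default)
```

Monotone conjunctions are the special case of a single rule whose guard requires each listed variable to be 1, which is why any superclass of conjunctions inherits the lower bound stated above.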
On the Evolvability of Monotone Conjunctions with an Evolutionary Mutation Mechanism
Valiant (2009) introduced a framework for a quantitative approach to evolution, called evolvability. The idea is, roughly, that there is an ideal behavior in every environment and the feedback that the various organisms receive during evolution indicates how close their behavior is to ideal. Ultimately, evolvability aims at modeling and explaining mechanisms that allow near-optimal behavior of organisms while exploiting realistic computational resources. Due to a result by Feldman (2008), evolvability is equivalent to learning in the correlational statistical query (CSQ) model (Bshouty & Feldman, 2002). Thus, evolvability algorithms correspond to a special type of local search learning algorithms that fall under the umbrella of the probably approximately correct (PAC) model of learning (Valiant, 1984).
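As a rough illustration of the local-search flavour of such algorithms, here is a simplified hill-climbing sketch for monotone conjunctions driven only by correlation with the target. The function names, the single-variable mutation neighbourhood, and the acceptance rule are illustrative assumptions rather than the exact mechanism of any evolvability algorithm, and the exact correlation over the whole cube restricts it to tiny n:

```python
import random
from itertools import product

def label(idxs, x):
    # +1/-1 label of the monotone conjunction over the indices in idxs.
    return 1 if all(x[i] for i in idxs) else -1

def performance(hyp, target, points):
    # Correlation between hypothesis and target labels over the given points;
    # this is the only feedback the search receives (CSQ-style).
    return sum(label(hyp, x) * label(target, x) for x in points) / len(points)

def evolve(target, n, generations=300, seed=0):
    # Toggle one random variable per generation; keep the mutant whenever
    # its performance is at least that of the current hypothesis.
    rng = random.Random(seed)
    cube = list(product((0, 1), repeat=n))
    hyp = set()                          # start from the empty conjunction
    for _ in range(generations):
        mutant = hyp ^ {rng.randrange(n)}
        if performance(mutant, target, cube) >= performance(hyp, target, cube):
            hyp = mutant
    return hyp

best = evolve({0, 2}, n=5)
print(sorted(best))
```

The search never sees individual labels, only aggregate performance, which is the sense in which evolvability algorithms are a restricted form of PAC learning.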
On the Hardness of Robust Classification
Gourdeau, Pascale, Kanade, Varun, Kwiatkowska, Marta, Worrell, James
It is becoming increasingly important to understand the vulnerability of machine learning models to adversarial attacks. In this paper we study the feasibility of robust learning from the perspective of computational learning theory, considering both sample and computational complexity. In particular, our definition of robust learnability requires polynomial sample complexity. We start with two negative results. We show that no non-trivial concept class can be robustly learned in the distribution-free setting against an adversary who can perturb just a single input bit. We show moreover that the class of monotone conjunctions cannot be robustly learned under the uniform distribution against an adversary who can perturb $\omega(\log n)$ input bits. However, if the adversary is restricted to perturbing $O(\log n)$ bits, then the class of monotone conjunctions can be robustly learned with respect to a general class of distributions (that includes the uniform distribution). Finally, we provide a simple proof of the computational hardness of robust learning on the Boolean hypercube. Unlike previous results of this nature, our result does not rely on another computational model (e.g. the statistical query model) nor on any hardness assumption other than the existence of a hard learning problem in the PAC framework.
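The $O(\log n)$ versus $\omega(\log n)$ threshold recurring in these abstracts reflects the exponential dependence on the budget $\rho$: a cost of order $2^\rho$ is polynomial in $n$ precisely when $\rho = O(\log n)$, and superpolynomial beyond it. A quick arithmetic check with hypothetical numbers:

```python
import math

n = 1024
rho_ok  = int(math.log2(n))          # O(log n) budget: rho = 10
rho_bad = int(math.log2(n) ** 2)     # omega(log n) budget: rho = 100

print(2 ** rho_ok)    # 1024 = n: exponential-in-budget is still polynomial in n
print(2 ** rho_bad)   # 2^100, vastly superpolynomial in n
```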