Meunier, Laurent
On the Role of Randomization in Adversarially Robust Classification
Gnecco-Heredia, Lucas, Chevaleyre, Yann, Negrevergne, Benjamin, Meunier, Laurent, Pydi, Muni Sreenivas
Deep neural networks are known to be vulnerable to small adversarial perturbations in test data. To defend against adversarial attacks, probabilistic classifiers have been proposed as an alternative to deterministic ones. However, the literature contains conflicting findings on the effectiveness of probabilistic classifiers compared to deterministic ones. In this paper, we clarify the role of randomization in building adversarially robust classifiers. Given a base hypothesis set of deterministic classifiers, we show the conditions under which a randomized ensemble outperforms the hypothesis set in adversarial risk, extending previous results. Additionally, we show that for any probabilistic binary classifier (including randomized ensembles), there exists a deterministic classifier that outperforms it. Finally, we give an explicit description of the deterministic hypothesis set that contains such a deterministic classifier for many types of commonly used probabilistic classifiers, namely randomized ensembles and parametric/input noise injection.
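For context, the comparison above is between adversarial risks of the following form, written here in illustrative notation (the paper's exact definitions may differ slightly): for a deterministic classifier $h$ and a randomized classifier given by a distribution $q$ over a hypothesis set $\mathcal{H}$,
$$ R^{\varepsilon}_{\mathrm{adv}}(h) = \mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\sup_{\|\delta\|\le\varepsilon} \mathbf{1}\{h(x+\delta)\neq y\}\Big], \qquad R^{\varepsilon}_{\mathrm{adv}}(q) = \mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\sup_{\|\delta\|\le\varepsilon} \mathbb{E}_{h\sim q}\,\mathbf{1}\{h(x+\delta)\neq y\}\Big], $$
so "outperforms in adversarial risk" means achieving a strictly smaller value of this quantity.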
An $\ell^p$-based Kernel Conditional Independence Test
Scetbon, Meyer, Meunier, Laurent, Romano, Yaniv
We propose a new computationally efficient test for conditional independence based on the $L^{p}$ distance between two kernel-based representatives of well-suited distributions. By evaluating the difference of these two representatives at a finite set of locations, we derive a finite-dimensional approximation of the $L^{p}$ metric, obtain its asymptotic distribution under the null hypothesis of conditional independence, and design a simple statistical test from it. The resulting test is consistent and computationally efficient. We conduct a series of experiments showing that our new test outperforms state-of-the-art methods both in terms of statistical power and type-I error, even in the high-dimensional setting.
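As a rough illustration of the construction described above (the notation below is a placeholder; the exact representatives, locations and normalization are those specified in the paper), the finite-dimensional statistic has the flavour of
$$ \widehat{T}_{p} = \Big(\frac{1}{J}\sum_{j=1}^{J} \big|\widehat{\mu}_{1}(t_{j}) - \widehat{\mu}_{2}(t_{j})\big|^{p}\Big)^{1/p}, $$
where $t_{1},\dots,t_{J}$ is the finite set of locations and $\widehat{\mu}_{1},\widehat{\mu}_{2}$ are the two empirical kernel-based representatives; the test rejects conditional independence when a suitably normalized version of $\widehat{T}_{p}$ exceeds a quantile of its asymptotic null distribution.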
Equitable and Optimal Transport with Multiple Agents
Scetbon, Meyer, Meunier, Laurent, Atif, Jamal, Cuturi, Marco
We introduce an extension of the Optimal Transport problem when multiple costs are involved. Considering each cost as an agent, we aim to share equally between agents the work of transporting one distribution to another. To do so, we minimize the transportation cost of the agent who works the most. Another point of view is when the goal is to equitably partition goods among agents according to their heterogeneous preferences; here we aim to maximize the utility of the least advantaged agent. This is a fair division problem. Like Optimal Transport, the problem can be cast as a linear optimization problem. When there is only one agent, we recover the Optimal Transport problem. When two agents are considered, we are able to recover Integral Probability Metrics defined by $\alpha$-H\"older functions, which include the widely-known Dudley metric. To the best of our knowledge, this is the first time a link is made between the Dudley metric and Optimal Transport. We provide an entropic regularization of this problem, which leads to an alternative algorithm that is faster than the standard linear program.
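One natural way to write the objective described above, in illustrative notation (the paper's exact formulation and constraint set may differ): given costs $c_{1},\dots,c_{N}$, the transport plan is split among the $N$ agents and the heaviest workload is minimized,
$$ \min_{\substack{\pi_{1},\dots,\pi_{N}\ge 0 \\ \sum_{i}\pi_{i}\in\Pi(\mu,\nu)}} \;\max_{1\le i\le N} \int c_{i}(x,y)\,\mathrm{d}\pi_{i}(x,y), $$
which reduces to standard Optimal Transport when $N=1$.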
On averaging the best samples in evolutionary computation
Meunier, Laurent, Chevaleyre, Yann, Rapin, Jeremy, Royer, Clément W., Teytaud, Olivier
Choosing the right selection rate is a long-standing issue in evolutionary computation. In the continuous unconstrained case, we prove mathematically that using a single parent ($\mu=1$) leads to a sub-optimal simple regret in the case of the sphere function. We provide a theoretically-based selection rate $\mu/\lambda$ that leads to better progress rates. With our choice of selection rate, we get a provable regret of order $O(\lambda^{-1})$, which is to be compared with $O(\lambda^{-2/d})$ in the case where $\mu=1$. We complete our study with experiments that confirm our theoretical claims.
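As a toy illustration of the recombination scheme analyzed above (averaging the best $\mu$ of $\lambda$ samples on the sphere function), here is a minimal NumPy sketch; the fixed step size and the ratio $\mu/\lambda$ below are arbitrary illustrative choices, not the theoretically optimal selection rate derived in the paper, and step-size adaptation is omitted.

import numpy as np

def sphere(x):
    # Sphere function f(x) = ||x||^2, the benchmark used in the analysis.
    return float(np.sum(x ** 2))

def es_step(parent, sigma, lam, mu, rng):
    # Sample lambda offspring around the current parent.
    offspring = parent + sigma * rng.standard_normal((lam, parent.size))
    # Rank offspring by fitness and keep the mu best.
    ranks = np.argsort([sphere(z) for z in offspring])
    # Recombine by averaging the selected samples.
    return offspring[ranks[:mu]].mean(axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal(10)
for _ in range(200):
    x = es_step(x, sigma=0.1, lam=64, mu=16, rng=rng)
print(sphere(x))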
Adversarial Attacks on Linear Contextual Bandits
Garcelon, Evrard, Roziere, Baptiste, Meunier, Laurent, Tarbouriech, Jean, Teytaud, Olivier, Lazaric, Alessandro, Pirotta, Matteo
Contextual bandit algorithms are applied in a wide range of domains, from advertising to recommender systems, from clinical trials to education. In many of these domains, malicious agents may have incentives to attack the bandit algorithm to induce it to perform a desired behavior. For instance, an unscrupulous ad publisher may try to increase their own revenue at the expense of the advertisers; a seller may want to increase the exposure of their products, or thwart a competitor's advertising campaign. In this paper, we study several attack scenarios and show that a malicious agent can force a linear contextual bandit algorithm to pull any desired arm $T - o(T)$ times over a horizon of $T$ steps, while applying adversarial modifications to either rewards or contexts that only grow logarithmically as $O(\log T)$. We also investigate the case when a malicious agent is interested in affecting the behavior of the bandit algorithm in a single context (e.g., a specific user). We first provide sufficient conditions for the feasibility of the attack and then propose an efficient algorithm to perform it. We validate our theoretical results with experiments on both synthetic and real-world datasets.
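To make the threat model concrete, here is a deliberately simplified sketch of a reward-poisoning attacker that leaves the target arm untouched and depresses the observed reward of every other arm; this is a generic illustration of the attack surface, not the paper's algorithm, and the margin parameter is a placeholder.

def poison_reward(pulled_arm, true_reward, target_arm, margin=0.1):
    # Leave the reward unchanged when the learner already pulls the target arm.
    if pulled_arm == target_arm:
        return true_reward
    # Otherwise, report a reward low enough to make the pulled arm look unattractive.
    return min(true_reward, -margin)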
Robust Neural Networks using Randomized Adversarial Training
Araujo, Alexandre, Pinot, Rafael, Negrevergne, Benjamin, Meunier, Laurent, Chevaleyre, Yann, Yger, Florian, Atif, Jamal
Since the discovery of adversarial examples in machine learning, researchers have designed several techniques to train neural networks that are robust against different types of attacks (most notably $\ell_\infty$ and $\ell_2$ based attacks). However, it has been observed that the defense mechanisms designed to protect against one type of attack often offer poor performance against the other. In this paper, we introduce Randomized Adversarial Training (RAT), a technique that is efficient against both $\ell_2$ and $\ell_\infty$ attacks. To obtain this result, we build upon adversarial training, a technique that is efficient against $\ell_\infty$ attacks, and demonstrate that adding random noise at training and inference time further improves performance against $\ell_2$ attacks. We then show that RAT is as efficient as adversarial training against $\ell_\infty$ attacks while being robust against strong $\ell_2$ attacks. Our final comparative experiments demonstrate that RAT outperforms all state-of-the-art approaches against $\ell_2$ and $\ell_\infty$ attacks.
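A minimal PyTorch-style sketch of the combination described above, assuming a generic model, an optimizer, and a batch (x, y); the attack parameters and noise level are illustrative placeholders, not the paper's training recipe.

import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Craft an l_inf-bounded adversarial example with projected gradient descent.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        delta.data = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()

def rat_train_step(model, optimizer, x, y, noise_std=0.1):
    # Adversarial training step with Gaussian noise injected on top of the
    # adversarial example (noise is also meant to be injected at inference time).
    x_adv = pgd_linf(model, x, y)
    x_noisy = x_adv + noise_std * torch.randn_like(x_adv)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_noisy), y)
    loss.backward()
    optimizer.step()
    return loss.item()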
Theoretical evidence for adversarial robustness through randomization: the case of the Exponential family
Pinot, Rafael, Meunier, Laurent, Araujo, Alexandre, Kashima, Hisashi, Yger, Florian, Gouy-Pailler, Cédric, Atif, Jamal
This paper investigates the theory of robustness against adversarial attacks. It focuses on the family of randomization techniques that consist in injecting noise into the network at inference time. These techniques have proven effective in many contexts, but lack theoretical arguments. We close this gap by presenting a theoretical analysis of these approaches, hence explaining why they perform well in practice. More precisely, we provide the first result relating the randomization rate to robustness against adversarial attacks. This result applies to the general family of exponential distributions, and thus extends and unifies previous approaches. We support our theoretical claims with a set of experiments.
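For concreteness, here is a minimal sketch of inference-time noise injection, with Gaussian noise standing in for a generic member of the exponential family; the noise level and number of samples are placeholders, and the paper's contribution is the theoretical link between such a noise level and the achievable robustness, not this particular prediction rule.

import torch

def predict_with_noise(model, x, noise_std=0.25, n_samples=100):
    # Average the softmax outputs over independent noise draws and predict
    # the class with the highest average probability.
    probs = 0.0
    with torch.no_grad():
        for _ in range(n_samples):
            noisy = x + noise_std * torch.randn_like(x)
            probs = probs + torch.softmax(model(noisy), dim=-1)
    return (probs / n_samples).argmax(dim=-1)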