Efficient Adversarial Attacks on Online Multi-agent Reinforcement Learning
Due to the broad range of applications of multi-agent reinforcement learning (MARL), understanding the effects of adversarial attacks against MARL models is essential for their safe application. Motivated by this, we investigate the impact of adversarial attacks on MARL. In the considered setup, there is an exogenous attacker who is able to modify the rewards before the agents receive them or manipulate the actions before the environment receives them. The attacker aims to guide each agent into a target policy, or to maximize the cumulative rewards under some specific reward function chosen by the attacker, while minimizing the amount of manipulation of feedback and actions. We first show the limitations of action-poisoning-only attacks and reward-poisoning-only attacks. We then introduce a mixed attack strategy that combines action poisoning and reward poisoning. We show that the mixed attack strategy can efficiently attack MARL agents even if the attacker has no prior information about the underlying environment or the agents' algorithms.
- Information Technology > Security & Privacy (1.00)
- Government > Military (1.00)
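The reward-poisoning side of the setup above can be pictured with a short sketch (not the paper's algorithm; all names here are hypothetical): the attacker intercepts each agent's reward, nudges agents toward an attacker-chosen target policy, and tracks the manipulation cost it tries to minimize.

```python
# Illustrative sketch of reward poisoning in a multi-agent setting.
# The attacker rewards agents for following a target policy and
# penalizes deviation, while accounting for the perturbation budget.
# `target_policy` maps each agent to a (state -> target action) function.

def poison_rewards(rewards, actions, target_policy, state, bonus=1.0):
    """Return perturbed rewards and the manipulation cost incurred.

    rewards: dict agent -> reward emitted by the environment
    actions: dict agent -> action the agent actually took this step
    """
    poisoned = {}
    cost = 0.0
    for agent, r in rewards.items():
        if actions[agent] == target_policy[agent](state):
            poisoned[agent] = r + bonus   # reinforce the target action
        else:
            poisoned[agent] = r - bonus   # discourage deviation
        cost += abs(poisoned[agent] - r)  # attack budget consumed
    return poisoned, cost
```

An action-poisoning attacker would instead rewrite `actions` before the environment sees them; the mixed strategy in the abstract combines both channels.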
An Efficient Adversarial Attack for Tree Ensembles
We study the problem of efficient adversarial attacks on tree-based ensembles such as gradient boosting decision trees (GBDTs) and random forests (RFs). Since these models are non-continuous step functions and gradients do not exist, most existing efficient adversarial attacks are not applicable. Although decision-based black-box attacks can be applied, they cannot exploit the special structure of trees. In our work, we transform the attack problem into a discrete search problem specially designed for tree ensembles, where the goal is to find a valid ``leaf tuple'' that leads to misclassification while having the shortest distance to the original input. With this formulation, we show that a simple yet effective greedy algorithm can be applied to iteratively optimize the adversarial example by moving the leaf tuple to a neighbor within Hamming distance 1. Experimental results on several large GBDT and RF models with up to hundreds of trees demonstrate that our method can be thousands of times faster than the previous mixed-integer linear programming (MILP) based approach, while also providing smaller (better) adversarial examples than decision-based black-box attacks on general $\ell_p$ ($p=1, 2, \infty$) norm perturbations.
- Information Technology > Security & Privacy (0.90)
- Government > Military (0.90)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Search (0.60)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Mathematical & Statistical Methods (0.60)
- Information Technology > Artificial Intelligence > Machine Learning > Ensemble Learning (0.60)
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (0.60)
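The greedy discrete search described in the abstract can be sketched as follows (an illustrative sketch, not the authors' implementation): a candidate adversarial example is a "leaf tuple" with one leaf per tree, and the search repeatedly moves to the best misclassified neighbor within Hamming distance 1. The helpers `leaf_choices`, `is_misclassified`, and `distance` are hypothetical stand-ins for the model-specific machinery.

```python
# Greedy search over leaf tuples: at each step, try changing one tree's
# leaf (a Hamming-distance-1 move), keep the change only if the tuple
# remains misclassified and gets closer to the original input.

def greedy_leaf_tuple_attack(start, leaf_choices, is_misclassified, distance):
    """start: initial misclassified leaf tuple (one leaf id per tree).
    leaf_choices[i]: iterable of alternative leaf ids for tree i.
    is_misclassified(t): whether leaf tuple t flips the model's prediction.
    distance(t): distance from t's feasible region to the original input."""
    current = list(start)
    best_dist = distance(current)
    improved = True
    while improved:
        improved = False
        for i, choices in enumerate(leaf_choices):  # one tree at a time
            for leaf in choices:
                if leaf == current[i]:
                    continue
                cand = current[:i] + [leaf] + current[i + 1:]
                if is_misclassified(cand) and distance(cand) < best_dist:
                    current, best_dist = cand, distance(cand)
                    improved = True
    return current, best_dist
```

In the real attack, `distance` would be the shortest $\ell_p$ distance from the input to the region of feature space consistent with the leaf tuple, and only *valid* tuples (those whose leaf regions intersect) are considered.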
Review for NeurIPS paper: An Efficient Adversarial Attack for Tree Ensembles
Weaknesses: The biggest variable in this approach seems to be the size of the neighborhood of possible adversarial examples considered when updating the current adversarial example. The choice of this variable introduces a tradeoff between efficiency and optimality. However, the authors do provide a greedy algorithm for estimating the neighborhood size to obtain optimal results, and they empirically show that a Hamming distance of 1 between neighbors generally works well. Since this approach utilizes the structure of the ensemble to create adversarial examples, is it robust to changes in the tree structure? For example, a tree may have multiple possible structures that are functionally equivalent; is this method robust to such structural changes, and does it provide adversarial examples that ultimately improve the robustness of a tree ensemble trained on a given dataset? In the same vein, have the authors performed any experiments applying their approach to building robust tree ensembles?
- Information Technology > Security & Privacy (0.40)
- Government > Military (0.40)
Robustness Tests of NLP Machine Learning Models: Search and Semantically Replace
Singh, Rahul, Jindal, Karan, Yu, Yufei, Yang, Hanyu, Joshi, Tarun, Campbell, Matthew A., Shoumaker, Wayne B.
This paper proposes a strategy to assess the robustness of machine learning models that involve natural language processing (NLP). The overall approach relies upon a Search and Semantically Replace strategy that consists of two steps: (1) Search, which identifies important parts of the text; (2) Semantically Replace, which finds replacements for the important parts and constrains the replacements to semantically similar words. We introduce different types of Search and Semantically Replace methods designed specifically for particular types of machine learning models. We also investigate the effectiveness of this strategy and provide a general framework to assess a variety of machine learning models. Finally, we provide an empirical comparison of robustness performance among three different model types, each with a different text representation.
- Banking & Finance (1.00)
- Information Technology > Security & Privacy (0.30)
- Government > Military (0.30)
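The two-step strategy in the abstract above can be sketched in a few lines (an illustrative sketch, not the authors' code): `importance` and `similar_words` are hypothetical stand-ins for a model-specific token scorer and a synonym source.

```python
# Search and Semantically Replace, in miniature:
#   Step 1 (Search): rank token positions by importance to the model.
#   Step 2 (Semantically Replace): substitute the top-k tokens with
#   semantically similar candidates, leaving the rest of the text intact.

def search_and_replace(tokens, importance, similar_words, k=1):
    """Replace the k most important tokens with semantically similar ones."""
    ranked = sorted(range(len(tokens)),
                    key=lambda i: importance(tokens[i]),
                    reverse=True)
    perturbed = list(tokens)
    for i in ranked[:k]:
        candidates = similar_words(tokens[i])
        if candidates:
            perturbed[i] = candidates[0]  # constrained to similar words
    return perturbed
```

A concrete `importance` function might score tokens by the drop in model confidence when they are masked, and `similar_words` might draw from a thesaurus or embedding neighborhood, as the abstract's "semantically similar words" constraint suggests.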