


Reviews: Provably robust boosted decision stumps and trees against adversarial attacks

Neural Information Processing Systems

As a main contribution, the authors derive an exact attack algorithm on ensembles of decision stumps for \ell_\infty perturbations. In contrast, this problem is known to be NP-hard for trees with at least 3 internal nodes, by previous work of Kantchelian et al.


Reviews: Provably robust boosted decision stumps and trees against adversarial attacks

Neural Information Processing Systems

Thank you for your submission to NeurIPS. After the author response and discussion, the reviewers and I agree that this work presents an interesting and substantial contribution to provably robust adversarial learning. The extension of such methods from the typical neural-network setting to boosted decision stumps is an interesting one, and certainly worthy of publication. The author response was particularly effective at addressing the points of the initially most negative reviewer, and it would be good to incorporate these points into the final version.



Provably robust boosted decision stumps and trees against adversarial attacks

Andriushchenko, Maksym, Hein, Matthias

Neural Information Processing Systems

The problem of adversarial robustness has been studied extensively for neural networks. However, for boosted decision trees and decision stumps there are almost no results, even though they are widely used in practice. We show in this paper that for boosted decision stumps the \textit{exact} min-max robust loss and test error for an $l_\infty$-attack can be computed in $O(T\log T)$ time per input, where $T$ is the number of decision stumps, and that the optimal update step of the ensemble can be done in $O(n^2\,T\log T)$, where $n$ is the number of data points. For boosted trees we show how to efficiently calculate and optimize an upper bound on the robust loss, which leads to state-of-the-art robust test error for boosted trees on MNIST (12.5\% for $\epsilon_\infty = 0.3$), FMNIST (23.2\% for $\epsilon_\infty = 0.1$), and CIFAR-10 (74.7\% for $\epsilon_\infty = 8/255$). Moreover, the robust test error rates we achieve are competitive with those of provably robust convolutional networks.
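
The tractability of the exact $l_\infty$ attack on a stump ensemble rests on the fact that each stump reads a single coordinate, so the adversary can minimize the margin coordinate by coordinate. Below is a minimal sketch of that idea; the function name, the `(coord, threshold, w_left, w_right)` stump representation, and the quadratic candidate scan are illustrative assumptions, not the paper's implementation (which sorts thresholds and sweeps to reach $O(T\log T)$ per input).

```python
from collections import defaultdict

def min_margin_stumps(x, y, stumps, eps):
    """Exact minimal margin y * f(x') over the l_inf ball of radius eps.

    `stumps` is a list of (coord, threshold, w_left, w_right): a stump
    outputs w_left if x[coord] <= threshold, else w_right.  Because every
    stump depends on one coordinate only, the attacker optimizes each
    coordinate independently.  (Sketch only: quadratic candidate scan,
    not the sorted O(T log T) sweep of the paper.)
    """
    by_coord = defaultdict(list)
    for coord, t, wl, wr in stumps:
        by_coord[coord].append((t, wl, wr))

    margin = 0.0
    for coord, group in by_coord.items():
        lo, hi = x[coord] - eps, x[coord] + eps
        # Candidate attack values: the interval endpoints plus a point
        # just past every threshold reachable inside the ball.
        candidates = [lo, hi] + [t + 1e-12 for t, _, _ in group if lo <= t < hi]
        margin += min(
            y * sum(wl if v <= t else wr for t, wl, wr in group)
            for v in candidates
        )
    return margin
```

For a single stump splitting coordinate 0 at threshold 0 (left value -1, right value +1), the point x = [0.5] with label y = +1 keeps margin 1 under eps = 0.3, but an eps = 1.0 perturbation can push it across the split and drive the margin to -1.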