Review for NeurIPS paper: An Efficient Adversarial Attack for Tree Ensembles


Weaknesses: The biggest variable in this approach appears to be the size of the neighborhood of candidate adversarial examples considered when updating the current adversarial example. The choice of this variable introduces a tradeoff between efficiency and optimality. However, the authors do provide a greedy algorithm for estimating the neighborhood size, and they empirically show that a Hamming distance of 1 between neighbors generally works well. Since this approach exploits the structure of the ensemble to create adversarial examples, is it robust to changes in the tree structure? For example, a tree may have multiple functionally equivalent structures; is the method robust to such structural changes, and does it produce adversarial examples that ultimately improve the robustness of a tree ensemble trained on a given dataset? In the same vein, have the authors performed any experiments applying their approach to building robust tree ensembles?
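To make the efficiency/optimality tradeoff concrete, here is a minimal sketch (my own illustration, not the authors' code) of enumerating a Hamming-distance-1 neighborhood under the assumption that a point is represented by the tuple of leaves it reaches in each tree; the function name and leaf counts are hypothetical:

```python
def hamming1_neighbors(leaf_tuple, leaves_per_tree):
    """Enumerate leaf tuples differing from leaf_tuple in exactly one tree.

    leaf_tuple: current leaf index reached in each tree of the ensemble.
    leaves_per_tree: number of leaves in each tree (hypothetical sizes).
    """
    neighbors = []
    for i, n_leaves in enumerate(leaves_per_tree):
        for leaf in range(n_leaves):
            if leaf != leaf_tuple[i]:
                candidate = list(leaf_tuple)
                candidate[i] = leaf  # change only tree i's leaf
                neighbors.append(tuple(candidate))
    return neighbors

# Neighborhood size is sum_i (|leaves_i| - 1) at distance 1; at Hamming
# distance k it grows combinatorially, which is the source of the tradeoff.
nbrs = hamming1_neighbors((0, 2, 1), [3, 4, 2])
print(len(nbrs))  # (3-1) + (4-1) + (2-1) = 6
```

Even this toy version shows why a larger Hamming radius quickly becomes expensive, and hence why the authors' empirical finding that distance 1 suffices matters for efficiency.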