A and Model Statistics

Neural Information Processing Systems

We use 9 datasets and pre-trained models provided in Chen et al. (2019b). Methods on the bottom-left corner are better. For completeness, we include verification results (Chen et al., 2019b; Wang et al., 2020).



Neural Information Processing Systems

We have 2 large datasets, HIGGS and Bosch (see reply to [R3]-1). Table B highlights our differences. 3) Motivation: we provide a strong attack as a tool for evaluating the robustness of tree-based models. MILP uses a thin wrapper around the Gurobi solver.


Appendix

Neural Information Processing Systems

Details regarding the datasets used in the experiments are included in Table 2. For Yang et al. [2020], we progressively doubled the number of regions searched, which is the only adjustable hyperparameter. To make this figure, we ran all the experiments (all attacks, datasets, and choices of hyperparameters) on a server with 40 cores of Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz. This outcome is seemingly more perplexing than the previous one. We explain it for different values of m, namely the small-m and the large-m regions.




In Table A, we repeat our experiments on 5000 test examples for each dataset

Neural Information Processing Systems

We thank all reviewers for their valuable comments and suggestions. Table B highlights our differences. Methods on the bottom-left corner are better. We will enlarge figures and explain more. In Tables 2 and 3, HIGGS contains 10.5 million training examples. We additionally added Bosch (1.2 million examples, 968 features) in Table A. Our method is effective on both datasets.



Trading Computation for Communication: Distributed Stochastic Dual Coordinate Ascent

Neural Information Processing Systems

We present and study a distributed optimization algorithm based on a stochastic dual coordinate ascent method. Stochastic dual coordinate ascent methods enjoy strong theoretical guarantees and often perform better than stochastic gradient descent methods for regularized loss minimization problems, yet little effort has been devoted to studying them in a distributed framework. We make progress along this line by presenting a distributed stochastic dual coordinate ascent algorithm for a star network, with an analysis of the tradeoff between computation and communication. We verify our analysis with experiments on real datasets. Moreover, we compare the proposed algorithm with distributed stochastic gradient descent methods and distributed alternating direction methods of multipliers for optimizing SVMs in the same distributed framework, and observe competitive performance.
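To make the building block concrete, here is a minimal single-machine sketch of stochastic dual coordinate ascent for an L2-regularized hinge-loss SVM, the problem the abstract mentions. This is an illustration under our own naming and hyperparameter choices, not the paper's distributed algorithm; the distributed variant would run such updates on each worker's data partition and communicate primal updates over the star network.

```python
import numpy as np

def sdca_svm(X, y, lam=0.1, epochs=20, seed=0):
    """Stochastic dual coordinate ascent for an L2-regularized hinge-loss SVM.

    Primal: min_w  lam/2 ||w||^2 + (1/n) sum_i max(0, 1 - y_i w.x_i).
    Dual variables alpha_i live in [0, 1]; the primal vector is maintained as
    w = (1/(lam*n)) * sum_i alpha_i y_i x_i.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    sq_norms = (X ** 2).sum(axis=1)
    for _ in range(epochs):
        for i in rng.permutation(n):
            if sq_norms[i] == 0:
                continue
            # closed-form maximization of the dual over coordinate i,
            # clipped so alpha_i stays in [0, 1]
            step = (1.0 - y[i] * (X[i] @ w)) * lam * n / sq_norms[i]
            delta = max(-alpha[i], min(1.0 - alpha[i], step))
            alpha[i] += delta
            w += delta * y[i] * X[i] / (lam * n)
    return w, alpha
```

On linearly separable data this converges quickly; each coordinate update touches only one example, which is what makes the computation/communication tradeoff of the distributed version interesting.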


Data Selection: A Surprisingly Effective and General Principle for Building Small Interpretable Models

Ghose, Abhishek

arXiv.org Artificial Intelligence

We present convincing empirical evidence for an effective and general strategy for building accurate small models. Such models are attractive for interpretability and also find use in resource-constrained environments. The strategy is to learn the training distribution instead of using data from the test distribution. The distribution learning algorithm is not a contribution of this work; we highlight the broad usefulness of this simple strategy on a diverse set of tasks, and as such these rigorous empirical results are our contribution. We apply it to the tasks of (1) building cluster explanation trees, (2) prototype-based classification, and (3) classification using Random Forests, and show that it improves the accuracy of weak traditional baselines to the point that they are surprisingly competitive with specialized modern techniques. This strategy is also versatile with respect to the notion of model size. In the first two tasks, model size is identified by the number of leaves in the tree and the number of prototypes, respectively. In the final task, involving Random Forests, the strategy is shown to be effective even when model size is determined by more than one factor: the number of trees and their maximum depth. Positive results using multiple datasets are presented and shown to be statistically significant. These lead us to conclude that this strategy is both effective, i.e., leads to significant improvements, and general, i.e., is applicable to different tasks and model families, and therefore merits further attention in domains that require small accurate models.
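For readers unfamiliar with the second model family, a prototype-based classifier can be sketched as follows. This is only an illustration of the model class whose size is the number of prototypes; it is not the paper's distribution-learning strategy, and the choice of Euclidean distance and one medoid per class is our own simplification.

```python
import numpy as np

def fit_prototypes(X, y):
    """One prototype per class: the class medoid, i.e. the training point
    with the smallest total distance to the other members of its class."""
    protos, labels = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        # pairwise Euclidean distances within the class
        dists = np.linalg.norm(Xc[:, None, :] - Xc[None, :, :], axis=2)
        protos.append(Xc[np.argmin(dists.sum(axis=1))])
        labels.append(c)
    return np.array(protos), np.array(labels)

def predict(protos, labels, X):
    """Classify each row of X by the label of its nearest prototype."""
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]
```

The whole "model" is just the prototype array, so model size is transparently the prototype count; the paper's point is that reweighting or selecting the training data can make even such tiny models competitive.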


Theoretically Better and Numerically Faster Distributed Optimization with Smoothness-Aware Quantization Techniques

Wang, Bokun, Safaryan, Mher, Richtárik, Peter

arXiv.org Artificial Intelligence

To address the high communication costs of distributed machine learning, a large body of work has been devoted in recent years to designing various compression strategies, such as sparsification and quantization, and optimization algorithms capable of using them. Recently, Safaryan et al. (2021) pioneered a dramatically different compression design approach: they first use the local training data to form local smoothness matrices and then propose to design a compressor capable of exploiting the smoothness information contained therein. While this novel approach leads to substantial savings in communication, it is limited to sparsification as it crucially depends on the linearity of the compression operator. In this work, we generalize their smoothness-aware compression strategy to arbitrary unbiased compression operators, which also include sparsification. Specializing our results to stochastic quantization, we guarantee significant savings in communication complexity compared to standard quantization. In particular, we prove that block quantization with $n$ blocks theoretically outperforms single block quantization, leading to a reduction in communication complexity by an $\mathcal{O}(n)$ factor, where $n$ is the number of nodes in the distributed system. Finally, we provide extensive numerical evidence with convex optimization problems that our smoothness-aware quantization strategies outperform existing quantization schemes as well as the aforementioned smoothness-aware sparsification strategies with respect to three evaluation metrics: the number of iterations, the total amount of bits communicated, and wall-clock time.
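The stochastic quantization the abstract builds on can be sketched with a generic unbiased, QSGD-style quantizer, applied either to the whole vector or independently per block. This is a plain illustration of unbiased quantization and block quantization, not the paper's smoothness-aware scheme; the number of levels s and the contiguous block partition are illustrative choices of ours.

```python
import numpy as np

def quantize(x, s=4, rng=None):
    """Unbiased stochastic quantization with s levels: each |x_i|/||x|| is
    randomly rounded to an adjacent point of the grid {0, 1/s, ..., 1},
    with probabilities chosen so that E[quantize(x)] = x."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(x)
    if norm == 0:
        return np.zeros_like(x)
    scaled = np.abs(x) / norm * s
    lower = np.floor(scaled)
    # round up with probability equal to the fractional part -> unbiasedness
    levels = lower + (rng.random(x.shape) < scaled - lower)
    return np.sign(x) * norm * levels / s

def block_quantize(x, n_blocks, s=4, rng=None):
    """Quantize each of n_blocks contiguous blocks independently, so each
    block's norm adapts to its local scale -- the mechanism behind the
    block-quantization gains the abstract discusses."""
    rng = rng or np.random.default_rng()
    out = np.empty_like(x)
    for idx in np.array_split(np.arange(x.size), n_blocks):
        out[idx] = quantize(x[idx], s=s, rng=rng)
    return out
```

Averaging many independent quantizations of the same vector recovers it, which is the unbiasedness property that the paper's generalization to arbitrary unbiased compressors relies on.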