robust training objective
Review for NeurIPS paper: HYDRA: Pruning Adversarially Robust Neural Networks
Weaknesses:
- It is not clear that HYDRA improves robustness under adversarial attacks. Benign test accuracy appears to correlate with adversarial accuracy (see Table 1). The authors observe this indirectly as well (L217: "Our results confirm that the compressed networks show similar trend as non-compressed nets with these attacks"). It appears that, as long as models are compressed properly, the resulting models are similarly robust to dense networks. It is therefore important to evaluate some SOTA sparse networks and compare them with HYDRA.
Robust Training Objectives Improve Embedding-based Retrieval in Industrial Recommendation Systems
Matthew Kolodner, Mingxuan Ju, Zihao Fan, Tong Zhao, Elham Ghazizadeh, Yan Wu, Neil Shah, Yozen Liu
Improving recommendation systems (RS) can greatly enhance the user experience across many domains, such as social media. Many RS utilize embedding-based retrieval (EBR) approaches to retrieve candidates for recommendation. In an EBR system, embedding quality is key. According to recent literature, self-supervised multitask learning (SSMTL) has shown strong performance on academic benchmarks for embedding learning and has improved multiple downstream tasks, demonstrating greater resilience to conflicts between downstream tasks and thereby increased robustness and task generalization through the training objective. However, whether the success of SSMTL in academia as a robust training objective translates to large-scale (i.e., hundreds of millions of users and the interactions between them) industrial RS still requires verification. Simply adopting academic setups in industrial RS can raise two issues. First, many self-supervised objectives require data augmentations (e.g., embedding masking/corruption) over a large portion of users and items, which is prohibitively expensive in industrial RS. Second, some self-supervised objectives might not align with the recommendation task, which can lead to redundant computational overhead or negative transfer. In light of these two challenges, we evaluate a robust training objective, specifically SSMTL, in a large-scale friend recommendation system on a social media platform in the tech sector, identifying whether this increase in robustness can enhance retrieval at scale in the production setting. Through online A/B testing with SSMTL-based EBR, we observe statistically significant increases in key friend-recommendation metrics, with up to 5.45% improvement in new friends made and 1.91% improvement in new friends made with cold-start users.
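The SSMTL-style objective described above can be sketched loosely in numpy: an in-batch softmax retrieval loss combined with a masked-embedding self-supervised term that is computed only on a sampled fraction of users, reflecting the augmentation-cost concern raised in the abstract. This is a toy illustration, not the production system; all names (`ssmtl_loss`, `aux_fraction`, the identity "decoder" in the reconstruction term) are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def retrieval_loss(user_emb, item_emb):
    """In-batch softmax retrieval loss: the positive item for each user is
    the row with the same index; all other rows serve as negatives."""
    logits = user_emb @ item_emb.T                       # (B, B) similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def masked_embedding_loss(emb, mask_rate=0.3):
    """Self-supervised auxiliary term: corrupt a random subset of embedding
    dimensions and penalize reconstruction error (identity decoder here,
    standing in for a learned one)."""
    mask = rng.random(emb.shape) < mask_rate
    corrupted = np.where(mask, 0.0, emb)
    return float(np.mean((corrupted - emb) ** 2))

def ssmtl_loss(user_emb, item_emb, aux_fraction=0.25, aux_weight=0.1):
    """Total objective: retrieval loss plus the self-supervised term computed
    only on a sampled fraction of users, keeping augmentation cost bounded."""
    n_aux = max(1, int(aux_fraction * len(user_emb)))
    aux_idx = rng.choice(len(user_emb), size=n_aux, replace=False)
    return retrieval_loss(user_emb, item_emb) + \
           aux_weight * masked_embedding_loss(user_emb[aux_idx])

# toy batch of user and item embeddings
users = rng.normal(size=(8, 16))
items = rng.normal(size=(8, 16))
print(ssmtl_loss(users, items))
```

Sampling only `aux_fraction` of users for the auxiliary loss is one simple way to keep the augmentation overhead proportional to a budget rather than to the full batch.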
On Pruning Adversarially Robust Neural Networks
Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana
In safety-critical but computationally resource-constrained applications, deep learning faces two key challenges: lack of robustness against adversarial attacks and large neural network size (often millions of parameters). While the research community has extensively explored the use of robust training and network pruning \emph{independently} to address one of these challenges, we show that integrating existing pruning techniques with multiple types of robust training techniques, including verifiably robust training, leads to poor robust accuracy even though such techniques can preserve high regular accuracy. We further demonstrate that making pruning techniques aware of the robust learning objective can lead to a large improvement in performance. We realize this insight by formulating the pruning objective as an empirical risk minimization problem, which is then solved using SGD. We demonstrate the success of the proposed pruning technique across the CIFAR-10, SVHN, and ImageNet datasets with four different robust training techniques: iterative adversarial training, randomized smoothing, MixTrain, and CROWN-IBP. Specifically, at a 99\% connection pruning ratio, we achieve gains of up to 3.2, 10.0, and 17.8 percentage points in robust accuracy under state-of-the-art adversarial attacks for ImageNet, CIFAR-10, and SVHN, respectively. Our code and compressed networks are publicly available at https://github.com/inspire-group/compactness-robustness
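The pruning-as-ERM idea in the abstract can be illustrated on a toy linear model: importance scores are initialized from weight magnitudes (so magnitude pruning is the starting point), updated by SGD against a robust loss, and then hardened into a top-k mask. This is a hedged sketch under simplifying assumptions: the sigmoid relaxation and closed-form gradients below stand in for the paper's actual estimator, and the single FGSM-style perturbation stands in for the robust training objectives it supports.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_mask(scores):
    """Differentiable relaxation of the binary keep/drop mask; at the end
    of training the top-k scores are rounded to a hard mask."""
    return 1.0 / (1.0 + np.exp(-scores))

def robust_mse(w, mask, X, y, eps=0.1):
    """MSE on FGSM-style perturbed inputs: a stand-in for a robust training
    loss that the pruning step is made aware of."""
    wm = w * mask
    resid = X @ wm - y
    X_adv = X + eps * np.sign(np.outer(resid, wm))  # per-sample worst-case step
    resid_adv = X_adv @ wm - y
    return np.mean(resid_adv ** 2), X_adv

# toy data and dense "pretrained" weights
N, d = 64, 10
X = rng.normal(size=(N, d))
w = rng.normal(size=d)
y = X @ w + 0.1 * rng.normal(size=N)

# initialize scores from |w|, then run SGD on the scores against the
# robust loss (the attack is treated as fixed during the gradient step)
scores = np.abs(w).copy()
lr = 0.5
for _ in range(100):
    m = soft_mask(scores)
    loss, X_adv = robust_mse(w, m, X, y)
    resid = X_adv @ (w * m) - y
    grad_wm = 2 * X_adv.T @ resid / N   # dL/d(w * m)
    grad_s = grad_wm * w * m * (1 - m)  # chain rule through the sigmoid mask
    scores -= lr * grad_s

# harden: keep the top 50% of weights by learned score
k = d // 2
keep = np.argsort(scores)[-k:]
hard_mask = np.zeros(d)
hard_mask[keep] = 1.0
print(robust_mse(w, hard_mask, X, y)[0])
```

The point of the sketch is the decoupling the abstract describes: the weights stay fixed while the scores, and hence the pruning decision, are optimized against the robust objective rather than inherited from weight magnitudes alone.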