Goto

Quantifying Learning Guarantees for Convex but Inconsistent Surrogates

Neural Information Processing Systems

We study consistency properties of machine learning methods based on minimizing convex surrogates. We extend the recent framework of Osokin et al. (2017) for the quantitative analysis of consistency properties to the case of inconsistent surrogates. Our key technical contribution is a new lower bound on the calibration function for the quadratic surrogate, which is non-trivial (not always zero) in inconsistent cases. The new bound allows us to quantify the level of inconsistency of a setting and shows how learning with inconsistent surrogates can still come with guarantees on sample complexity and optimization difficulty. We apply our theory to two concrete cases: multi-class classification with the tree-structured loss and ranking with the mean average precision loss. The results show the approximation-computation trade-offs caused by inconsistent surrogates and their potential benefits.
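
As background, here is a minimal sketch of the central object in this framework, following the standard definition of the calibration function from Osokin et al. (2017). The symbols below ($f$ for surrogate scores, $q$ for the conditional label distribution, $\mathrm{pred}$ for the decoding map, $\delta\phi$ and $\delta\ell$ for excess surrogate and target risks) are notation assumed here; the paper's actual lower bound is not reproduced.

```latex
% Calibration function: the smallest excess surrogate risk that is
% compatible with an excess target risk of at least \varepsilon.
H(\varepsilon) \;=\; \inf_{f,\,q}\; \delta\phi(f, q)
  \quad \text{s.t.} \quad
  \delta\ell\bigl(\mathrm{pred}(f),\, q\bigr) \;\ge\; \varepsilon .
```

A surrogate is consistent exactly when $H(\varepsilon) > 0$ for every $\varepsilon > 0$; for an inconsistent surrogate, $H$ vanishes on some interval $[0, \varepsilon_0]$, so a non-trivial lower bound on $H$ for $\varepsilon > \varepsilon_0$ is what quantifies the level of inconsistency.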


Scalable Robust Matrix Factorization with Nonconvex Loss

Neural Information Processing Systems

Robust matrix factorization (RMF), which uses the $\ell_1$-loss, often outperforms standard matrix factorization using the $\ell_2$-loss, particularly when outliers are present. The state-of-the-art RMF solver is the RMF-MM algorithm, which, however, cannot exploit data sparsity. Moreover, sometimes even the (convex) $\ell_1$-loss is not robust enough. In this paper, we propose using a nonconvex loss to further enhance robustness. To address the resulting difficult optimization problem, we use majorization-minimization (MM) optimization and propose a new MM surrogate. To improve scalability, we exploit data sparsity and optimize the surrogate via its dual with the accelerated proximal gradient algorithm. The resulting algorithm has low time and space complexities and is guaranteed to converge to a critical point. Extensive experiments demonstrate its superiority over the state-of-the-art in terms of both accuracy and scalability.
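
As an illustration of the MM idea only (a minimal sketch, not the paper's solver, its new surrogate, or its dual accelerated-proximal-gradient optimization): below, a nonconvex Welsch loss is majorized at each outer step by a reweighted least-squares problem, which is then solved by alternating row-wise ridge regressions. The function names, the choice of Welsch loss, and all parameters are assumptions made for this sketch.

```python
import numpy as np

def welsch_weights(residual, c=1.0):
    """IRLS weights for the Welsch loss rho(r) = (c^2/2)(1 - exp(-(r/c)^2)):
    at residual r0, rho is majorized (up to a constant) by a quadratic
    with weight w = rho'(r0)/r0 = exp(-(r0/c)^2)."""
    return np.exp(-(residual / c) ** 2)

def rmf_mm(X, rank=5, c=1.0, lam=1e-3, n_iters=50, seed=0):
    """Matrix factorization X ~ U V^T under the Welsch loss via MM:
    each outer iteration fixes the weights W at the current residual
    and minimizes the weighted l2 majorizer
        sum_ij W_ij (X_ij - (U V^T)_ij)^2 + lam (||U||_F^2 + ||V||_F^2)
    by alternating row-wise ridge regressions."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    I = np.eye(rank)
    for _ in range(n_iters):
        W = welsch_weights(X - U @ V.T, c)  # majorization weights, shape (m, n)
        for i in range(m):                  # solve for each row of U
            D = np.diag(W[i])
            U[i] = np.linalg.solve(V.T @ D @ V + lam * I, V.T @ D @ X[i])
        for j in range(n):                  # solve for each row of V
            D = np.diag(W[:, j])
            V[j] = np.linalg.solve(U.T @ D @ U + lam * I, U.T @ D @ X[:, j])
    return U, V
```

For example, `U, V = rmf_mm(X, rank=10)` fits a dense `X` while down-weighting outlier entries; the paper's actual algorithm additionally exploits data sparsity and solves each MM surrogate via its dual with the accelerated proximal gradient algorithm.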



A Broader Impact

Neural Information Processing Systems

Our work designs privacy attacks, which have the potential to cause harm. The main limitation of our work is the strong threat model under which our attacks operate. All of our results on CIFAR-10 use fewer than 30,000 trained models. Figure 7 plots the effectiveness of Transfer LiRA. ROC curves for our student attacks are also reported, and further qualitative examples can be found in Figure 9. Ablations of score information and of CIFAR-10 with duplicates are found in Figure 11. We consider the distillation threat models simultaneously.