Robustness to Adversarial Perturbations in Learning from Incomplete Data

Najafi, Amir, Maeda, Shin-ichi, Koyama, Masanori, Miyato, Takeru

arXiv.org Machine Learning 

Robustness to adversarial perturbations has become an essential feature in the design of modern classifiers, in particular of deep neural networks. This line of work originates from several empirical observations, such as [1] and [2], which show that deep networks are vulnerable to adversarial attacks in the input space. A number of methodologies have since been introduced to compensate for this shortcoming; Adversarial Training (AT) [3], Virtual AT [4], and Distillation [5] are examples of promising methods in this area. The majority of these approaches seek an effective defense against a point-wise adversary, who shifts each input data-point toward an adversarial direction independently of the others. However, as shown by [6], a distributional adversary, who can shift the data distribution rather than individual data-points, is provably more detrimental to learning. This suggests that one can greatly improve the robustness of a classifier by defending against a distributional adversary rather than a point-wise one. This motivation has led to the development of Distributionally Robust Learning (DRL) [7], which has attracted intensive research interest over the last few years [8, 9, 10, 11]. Despite all the advances in supervised and unsupervised DRL, the amount of research tackling this problem from a semi-supervised angle is slim to none [12].
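
To make the contrast between the two adversaries concrete, the following display sketches the standard point-wise adversarial training objective alongside a typical distributionally robust formulation. The notation (loss ℓ, parameters θ, perturbation budget ε, Wasserstein ball of radius ε around the empirical distribution P̂_n with ground cost c) follows the usual conventions of the DRL literature [7, 8] and is an illustrative assumption, not a verbatim statement of this paper's objective.

% Point-wise adversarial training: each sample is perturbed independently
% within an \ell_p-ball of radius \epsilon (assumed notation).
\min_{\theta}\; \mathbb{E}_{(x,y)\sim \hat{P}_n}
  \Big[\, \max_{\|\delta\|_p \le \epsilon} \ell\big(\theta;\, x+\delta,\, y\big) \Big]

% Distributionally robust learning: the adversary may replace the whole
% empirical distribution \hat{P}_n with any Q in a Wasserstein ball of radius \epsilon.
\min_{\theta}\; \sup_{Q:\, W_c(Q,\hat{P}_n)\le \epsilon}
  \mathbb{E}_{(x,y)\sim Q}\big[ \ell(\theta;\, x,\, y) \big]

In the first objective the adversary acts on every data-point separately, whereas in the second it can reallocate probability mass across the whole input space, which is why a distributional adversary subsumes, and can be strictly stronger than, a point-wise one.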
