Rezaeifar, Shideh
$\rho$-VAE: Autoregressive parametrization of the VAE encoder
Ferdowsi, Sohrab, Diephuis, Maurits, Rezaeifar, Shideh, Voloshynovskiy, Slava
We make a minimal but very effective alteration to the VAE model: a drop-in replacement for the (sample-dependent) approximate posterior that changes it from the standard white Gaussian with diagonal covariance to a first-order autoregressive Gaussian. We argue that this is a more reasonable choice for natural signals like images, as it does not force the correlation existing in the data to disappear in the posterior, and it gives the approximate posterior more freedom to match the true posterior. Both the reparametrization trick and the KL-divergence term still admit closed-form expressions, obviating the need for sample-based estimation of the latter. Although it provides more freedom to adapt to correlated distributions, our parametrization has even fewer parameters than the diagonal covariance: it requires only two scalars, $\rho$ and $s$, to characterize correlation and scaling, respectively. As validated by the experiments, our proposition noticeably and consistently improves the quality of image generation in a plug-and-play manner, with no further parameter tuning and across all setups. The code to reproduce our experiments is available at \url{https://github.com/sssohrab/rho_VAE/}.
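To make the parametrization concrete, here is a minimal PyTorch sketch of the sampling and KL term as we read them from the abstract: a first-order autoregressive Gaussian has covariance $C_{ij} = s^2 \rho^{|i-j|}$, so $\mathrm{tr}(C) = d\,s^2$ and $\det C = s^{2d}(1-\rho^2)^{d-1}$, which gives the KL divergence to $\mathcal{N}(0, I)$ in closed form. Function and variable names are our own illustrative choices, not those of the linked repository.

```python
import torch

def rho_vae_sample(mu, rho, s):
    """Reparametrized draw z ~ N(mu, C) with C_ij = s^2 * rho^|i-j|.

    mu: (batch, d) means; rho, s: (batch,) tensors with -1 < rho < 1, s > 0.
    """
    eps = torch.randn_like(mu)
    scale = s * torch.sqrt(1.0 - rho ** 2)  # innovation std of the AR(1) chain
    cols = [mu[:, 0] + s * eps[:, 0]]
    for i in range(1, mu.shape[1]):
        # AR(1) recursion: keeps every marginal variance equal to s^2
        cols.append(mu[:, i] + rho * (cols[-1] - mu[:, i - 1]) + scale * eps[:, i])
    return torch.stack(cols, dim=1)

def rho_vae_kl(mu, rho, s):
    """Closed-form KL( N(mu, C) || N(0, I) ), per batch element."""
    d = mu.shape[1]
    return 0.5 * (d * s ** 2 + (mu ** 2).sum(dim=1) - d
                  - d * torch.log(s ** 2)
                  - (d - 1) * torch.log(1.0 - rho ** 2))
```

The recursion draws each coordinate conditionally on the previous one, which reproduces the AR(1) covariance without ever materializing the $d \times d$ matrix or its Cholesky factor.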
Robustification of deep net classifiers by key based diversified aggregation with pre-filtering
Taran, Olga, Rezaeifar, Shideh, Holotyak, Taras, Voloshynovskiy, Slava
In this paper, we address the problem of machine learning systems' vulnerability to adversarial attacks. We propose and investigate a Key-based Diversified Aggregation (KDA) mechanism as a defense strategy. The KDA assumes that the attacker (i) knows the architecture of the classifier and the used defense strategy, (ii) has access to the training data set, but (iii) does not know the secret key. The robustness of the system is achieved by a specially designed key-based randomization, which prevents gradient back-propagation and the creation of a "bypass" system. The randomization is performed simultaneously in several channels, and a multi-channel aggregation stabilizes its results by fusing the soft outputs of each classifier in the multi-channel system. The experimental evaluation demonstrates the high robustness and universality of the KDA against the most efficient gradient-based attacks, like those proposed by N. Carlini and D. Wagner, as well as non-gradient-based sparse adversarial perturbations like the OnePixel attack.
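As a rough illustration of the aggregation step only, the sketch below randomizes the input per channel with a key-seeded sign flip and averages the classifiers' soft outputs. The sign-flip choice and all names (`randomize`, `kda_predict`, `classifiers`) are our assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def randomize(x, key):
    # Key-based randomization of one channel: a secret sign pattern,
    # reproducible from the channel's key at training and test time.
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=x.shape) * x

def kda_predict(x, classifiers, keys):
    # Each classifier sees its own keyed randomization of the input;
    # averaging the soft outputs stabilizes the randomized scores.
    soft = [clf(randomize(x, key)) for clf, key in zip(classifiers, keys)]
    return np.mean(soft, axis=0)
```

An attacker who does not hold the keys cannot reproduce the per-channel transforms, so gradients estimated on a surrogate point into the wrong sub-spaces.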
Reconstruction of Privacy-Sensitive Data from Protected Templates
Rezaeifar, Shideh, Razeghi, Behrooz, Taran, Olga, Holotyak, Taras, Voloshynovskiy, Slava
In this paper, we address the problem of data reconstruction from privacy-protected templates, based on the recent concept of sparse ternary coding with ambiguization (STCA). The STCA relies on the addition of ambiguization noise to satisfy the privacy-utility trade-off. Its theoretical privacy-preserving properties have been validated on synthetic data, but threats linked to reconstruction based on recent deep reconstruction algorithms are still open problems. Our results demonstrate that STCA still achieves the claimed theoretical privacy-preserving properties.
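For orientation, here is a toy version of an STCA-style encoding as we understand it from the abstract: project the data, keep a sparse ternary code of the largest components, and hide its support with ambiguization noise. The projection matrix, parameter names, and sizes are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def stca_encode(x, W, sparsity, n_ambiguization, rng):
    """Toy sparse ternary coding with ambiguization."""
    y = W @ x
    support = np.argsort(np.abs(y))[-sparsity:]   # top-S magnitudes
    code = np.zeros_like(y)
    code[support] = np.sign(y[support])           # ternary code {-1, 0, +1}
    # Ambiguization: random +-1 noise on positions outside the true support,
    # so an adversary cannot tell informative components from noise.
    off = np.setdiff1d(np.arange(y.size), support)
    noisy = rng.choice(off, size=n_ambiguization, replace=False)
    code[noisy] = rng.choice([-1.0, 1.0], size=n_ambiguization)
    return code
```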
Defending against adversarial attacks by randomized diversification
Taran, Olga, Rezaeifar, Shideh, Holotyak, Taras, Voloshynovskiy, Slava
The vulnerability of machine learning systems to adversarial attacks questions their usage in many applications. In this paper, we propose randomized diversification as a defense strategy. We introduce a multi-channel architecture in a gray-box scenario, which assumes that the architecture of the classifier and the training data set are known to the attacker, but that the attacker has access neither to a secret key nor to the internal states of the system at test time. The defender processes an input in multiple channels. Each channel introduces its own randomization in a special transform domain, based on a secret key shared between the training and testing stages. Such transform-based randomization with a shared key preserves the gradients in key-defined sub-spaces for the defender, but prevents gradient back-propagation and the creation of bypass systems for the attacker. An additional benefit of the multi-channel randomization is the aggregation that fuses the soft outputs from all channels, thus increasing the reliability of the final score. The sharing of a secret key creates an information advantage for the defender. Experimental evaluation demonstrates increased robustness of the proposed method to a number of known state-of-the-art attacks.
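The sketch below suggests what one defender channel might look like: a fixed transform (a DCT here, our own choice for illustration) followed by a key-seeded sign flip of its coefficients. Because the same key is used at training and test time, the defender's gradients in the key-defined sub-space are preserved; this is a hedged reading of the abstract, not the authors' exact construction.

```python
import numpy as np
from scipy.fft import dct, idct

def channel_transform(x, key):
    # Randomize the input inside a key-defined sub-space of the transform
    # domain; an attacker without `key` cannot back-propagate through it.
    coeffs = dct(x, norm='ortho')
    rng = np.random.default_rng(key)
    flips = rng.choice([-1.0, 1.0], size=coeffs.shape)  # secret sign pattern
    return idct(flips * coeffs, norm='ortho')
```

Each channel's classifier would then be trained and evaluated on the output of its own keyed transform, and the channels' soft outputs fused as in the aggregation sketch above.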
Bridging machine learning and cryptography in defence against adversarial attacks
Taran, Olga, Rezaeifar, Shideh, Voloshynovskiy, Slava
In the last decade, deep learning algorithms have become very popular thanks to the performance achieved in many machine learning and computer vision tasks. However, most deep learning architectures are vulnerable to so-called adversarial examples. This questions the security of deep neural networks (DNN) in many security- and trust-sensitive domains. The majority of existing adversarial attacks are based on the differentiability of the DNN cost function. Defence strategies are mostly based on machine learning and signal processing principles that either try to detect-reject or filter out the adversarial perturbations, and they completely neglect the classical cryptographic component in the defence. In this work, we propose a new defence mechanism based on the second Kerckhoffs cryptographic principle, which states that the defence and classification algorithms are supposed to be known, but not the key. To be compliant with the assumption that the attacker does not have access to the secret key, we primarily focus on a gray-box scenario and do not address a white-box one. More particularly, we assume that the attacker does not have direct access to the secret block, but (a) he completely knows the system architecture, (b) he has access to the data used for training and testing, and (c) he can observe the output of the classifier for each given input. We show empirically that our system is efficient against the most famous state-of-the-art attacks in black-box and gray-box scenarios.