
Neural Information Processing Systems 

In membership-inference attacks [Salem et al., 2018, Yeom et al., 2018, Song and Mittal, 2021], the adversary determines whether a given sample was part of the target model's training set; e.g., if the inputs are images, the adversary must be able to guess whether a specific image was used during training. In model-extraction attacks [Chen et al., 2021], the adversary aims to steal the trained model's functionality. In this attack, the adversary has only black-box access, with no prior knowledge of the model parameters or training data, and the outcome of the attack is a model that approximates the target model. Model-inversion attacks [Fredrikson et al., 2015] are perhaps the closest to our setting: Fredrikson et al. [2015] showed that a face-recognition model can be used to reconstruct images of a certain person. This is done by using gradient descent to find an input that maximizes the output probability that the face-recognition model assigns to a specific class. Zhang et al. [2020] leverage partial public information to improve such reconstructions: they generate images for which the target model outputs a high probability for the considered class (as in Fredrikson et al. [2015]), while also encouraging realistic images using a GAN.
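The gradient-based inversion step described above can be sketched in a few lines. This is a minimal illustration, not the method of Fredrikson et al. [2015]: the "model" below is a hypothetical random linear-softmax classifier rather than a face-recognition network, and all names (`W`, `invert`, the dimensions) are assumptions made for the sketch. The core idea is the same: treat the input as the optimization variable and ascend the gradient of the probability the model assigns to the target class.

```python
import numpy as np

# Hypothetical toy "target model": a fixed linear layer followed by softmax
# (stands in for the black-box classifier being inverted).
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 64))  # 10 classes, 64-dimensional inputs


def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()


def invert(target_class, steps=200, lr=0.5):
    """Gradient ascent on the INPUT to maximize p(target_class | x).

    For this linear-softmax model the gradient of log p_t(x) w.r.t. x
    has the closed form W[t] - sum_c p_c * W[c]; a real attack would
    obtain the same quantity via automatic differentiation.
    """
    x = np.zeros(64)
    for _ in range(steps):
        p = softmax(W @ x)
        grad = W[target_class] - p @ W
        x += lr * grad
    return x, softmax(W @ x)[target_class]


x_rec, conf = invert(target_class=3)
print(f"model confidence in target class after inversion: {conf:.4f}")
```

In Zhang et al. [2020]'s variant, the optimization variable would instead be the latent code of a GAN generator trained on public data, so the reconstructed input is constrained to lie on the manifold of realistic images.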
