Trade-offs and Guarantees of Adversarial Representation Learning for Information Obfuscation
Neural Information Processing Systems
Crowdsourced data used in machine learning services may carry sensitive information about attributes that users do not want to share. Various methods have been proposed to minimize leakage of sensitive attributes while maximizing task accuracy, but little is known about the theory behind these methods. In light of this gap, we develop a novel theoretical framework for attribute obfuscation. Under our framework, we propose a minimax optimization formulation to protect the given attribute and analyze its inference guarantees against worst-case adversaries.
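To make the minimax idea concrete, the following is a minimal NumPy sketch of adversarial representation learning, not the paper's actual method: a linear encoder and task head descend the task loss while ascending an adversary's loss on the sensitive attribute, and the adversary simultaneously descends its own loss. All shapes, the trade-off weight `lam`, and the learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (hypothetical shapes): features X, task label y, sensitive attribute s.
n, d, k = 64, 8, 4
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n).astype(float)
s = rng.integers(0, 2, size=n).astype(float)

W_enc = rng.normal(scale=0.1, size=(d, k))   # encoder: x -> representation z
w_task = rng.normal(scale=0.1, size=k)       # task predictor on z
w_adv = rng.normal(scale=0.1, size=k)        # adversary predicting s from z

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, t):
    eps = 1e-9
    return -np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))

lam, lr = 1.0, 0.1  # trade-off weight and step size (illustrative)
for step in range(200):
    Z = X @ W_enc
    p_task = sigmoid(Z @ w_task)
    p_adv = sigmoid(Z @ w_adv)

    # Inner max: adversary descends its own loss (i.e., gets stronger).
    w_adv -= lr * (Z.T @ (p_adv - s) / n)

    # Outer min: task head descends the task loss.
    w_task -= lr * (Z.T @ (p_task - y) / n)

    # Encoder descends task loss but *ascends* the adversary's loss,
    # pushing the representation to hide the sensitive attribute.
    dZ = (np.outer(p_task - y, w_task) - lam * np.outer(p_adv - s, w_adv)) / n
    W_enc -= lr * (X.T @ dZ)

Z = X @ W_enc
task_loss = bce(sigmoid(Z @ w_task), y)
adv_loss = bce(sigmoid(Z @ w_adv), s)
```

The key design point is the sign flip in `dZ`: the same adversary-loss gradient that strengthens `w_adv` is subtracted when updating the encoder, which is the standard gradient-reversal view of the minimax objective.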