Learning Robust and Privacy-Preserving Representations via Information Theory
Binghui Zhang, Sayedeh Leila Noorbakhsh, Yun Dong, Yuan Hong, Binghui Wang
Machine learning models are vulnerable to both security attacks (e.g., adversarial examples) and privacy attacks (e.g., private attribute inference). We take the first step toward mitigating both security and privacy attacks while also maintaining task utility. In particular, we propose an information-theoretic framework that achieves these goals through the lens of representation learning, i.e., by learning representations that are robust to both adversarial examples and attribute inference adversaries. We also derive novel theoretical results under our framework, e.g., an inherent trade-off between adversarial robustness/utility and attribute privacy, and guaranteed bounds on attribute privacy leakage against attribute inference adversaries.
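The abstract does not give the paper's exact objective; as an illustrative sketch only, an information-theoretic representation-learning objective of this flavor is often written as below, where E is an encoder, X the input, Y the task label, S the private attribute, and δ a norm-bounded adversarial perturbation. The trade-off weight β and perturbation budget ε are assumed hyperparameters, not values taken from the paper.

```latex
% Illustrative sketch only, not the paper's exact formulation.
% E: encoder; X: input; Y: task label; S: private attribute;
% \delta: adversarial perturbation with budget \epsilon;
% \beta: assumed privacy/utility trade-off weight.
\[
\max_{E} \;
  \underbrace{\min_{\|\delta\|_{p} \le \epsilon} I\bigl(E(X+\delta);\, Y\bigr)}_{\text{worst-case (robust) task utility}}
  \;-\;
  \beta \, \underbrace{I\bigl(E(X);\, S\bigr)}_{\text{attribute privacy leakage}}
\]
```

Maximizing the first term encourages representations that still predict Y under worst-case perturbations, while the second term penalizes information about S; the tension between these two terms is where the robustness/utility-versus-privacy trade-off mentioned in the abstract arises.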
arXiv.org Artificial Intelligence
Dec-15-2024
- Country:
  - North America > United States > California (0.14)
- Genre:
  - Research Report (0.64)
- Industry:
  - Information Technology > Security & Privacy (1.00)
- Technology:
  - Information Technology
    - Artificial Intelligence
      - Machine Learning > Neural Networks (0.69)
      - Representation & Reasoning (0.93)
      - Vision (1.00)
    - Data Science > Data Mining > Big Data (0.42)
    - Security & Privacy (1.00)
    - Sensing and Signal Processing > Image Processing (1.00)