Attack-Aware Noise Calibration for Differential Privacy
Neural Information Processing Systems
Differential privacy (DP) is a widely used approach for mitigating privacy risks when training machine learning models on sensitive data. DP mechanisms add noise during training to limit the risk of information leakage. The scale of the added noise is critical, as it determines the trade-off between privacy and utility.
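As a minimal sketch of what "calibrating the noise scale" means here, the classical Gaussian mechanism sets the standard deviation of the added noise from the privacy parameters (ε, δ) and the query's L2 sensitivity. The formula below is the standard textbook calibration (valid for ε < 1), not the attack-aware calibration this paper proposes; the function names are illustrative:

```python
import math
import random

def gaussian_sigma(epsilon: float, delta: float, sensitivity: float = 1.0) -> float:
    """Classical Gaussian-mechanism calibration: noise scale sigma that
    yields (epsilon, delta)-DP for a query with the given L2 sensitivity
    (standard bound, valid for epsilon < 1)."""
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon

def privatize(value: float, epsilon: float, delta: float,
              sensitivity: float = 1.0) -> float:
    """Release a scalar with Gaussian noise calibrated to (epsilon, delta)."""
    sigma = gaussian_sigma(epsilon, delta, sensitivity)
    return value + random.gauss(0.0, sigma)

# Tighter privacy (smaller epsilon) forces a larger noise scale,
# which is exactly the privacy-utility trade-off described above.
assert gaussian_sigma(0.5, 1e-5) > gaussian_sigma(0.9, 1e-5)
```

The key point the abstract makes is that this scale directly controls utility: halving ε doubles σ in this calibration, degrading model accuracy accordingly.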