Attack-Aware Noise Calibration for Differential Privacy

Neural Information Processing Systems 

Differential privacy (DP) is a widely used approach for mitigating privacy risks when training machine learning models on sensitive data. DP mechanisms add noise during training to limit the risk of information leakage. The scale of the added noise is critical, as it determines the trade-off between privacy and utility.
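The noise-scale trade-off described above can be illustrated with the classical Gaussian mechanism, a standard way of adding DP noise. This is a minimal sketch for illustration only; it is not the attack-aware calibration method the paper proposes, and the function name and calibration formula (sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon, valid for epsilon < 1) are standard textbook choices, not drawn from this work.

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    """Release `value` with Gaussian noise calibrated to (epsilon, delta)-DP.

    Uses the classical calibration sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon.
    Smaller epsilon (stronger privacy) yields larger sigma, hence lower utility.
    """
    rng = rng or np.random.default_rng()
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))

# Example: privatizing a sum with sensitivity 1 under (0.5, 1e-5)-DP.
noisy = gaussian_mechanism(np.zeros(10_000), sensitivity=1.0,
                           epsilon=0.5, delta=1e-5,
                           rng=np.random.default_rng(0))
```

Halving epsilon doubles the noise standard deviation under this calibration, which is exactly the privacy-utility tension the abstract refers to.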
