Considerations on the Theory of Training Models with Differential Privacy
Marten van Dijk, Phuong Ha Nguyen
–arXiv.org Artificial Intelligence
Privacy leakage is a major problem in the big-data era. Solving a learning task based on big data intrinsically means that only through a collaborative effort is sufficient data available for training a global model with sufficient clean accuracy (utility). Federated learning is a framework in which a learning task is solved by a loose federation of participating devices/clients coordinated by a central server [42, 8, 3, 33, 40, 9, 57, 29, 59, 10, 34, 36, 37, 39, 12, 30]. Clients, who use their own local data to participate in a learning task by training a global model, want privacy guarantees for their local proprietary data. For this reason DP-SGD [1] was introduced, as it adapts distributed Stochastic Gradient Descent (SGD) [55] with Differential Privacy (DP) [19, 15, 21, 18].
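The core mechanism DP-SGD [1] adds on top of SGD is per-example gradient clipping followed by calibrated Gaussian noise. A minimal sketch of one such aggregation step is below; the function name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD aggregation step, in the style of Abadi et al. [1]:
    clip each per-example gradient to L2 norm `clip_norm`, sum them,
    add Gaussian noise with std `noise_multiplier * clip_norm`, and
    average over the batch. (Illustrative sketch, not the paper's code.)"""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only gradients whose norm exceeds the clipping bound.
        clipped.append(g / max(1.0, norm / clip_norm))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

With the noise multiplier set to zero the step reduces to plain clipped-gradient averaging, which makes the clipping behaviour easy to check in isolation.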
Jul-16-2023