Near Optimal Private and Robust Linear Regression
Xiyang Liu, Prateek Jain, Weihao Kong, Sewoong Oh, Arun Sai Suggala
arXiv.org Artificial Intelligence
We study the canonical statistical estimation problem of linear regression from $n$ i.i.d. examples under $(\varepsilon,\delta)$-differential privacy when some response variables are adversarially corrupted. We propose a variant of the popular differentially private stochastic gradient descent (DP-SGD) algorithm with two innovations: a full-batch gradient descent to improve sample complexity and a novel adaptive clipping to guarantee robustness. When there is no adversarial corruption, this algorithm improves upon the existing state-of-the-art approach and achieves a near optimal sample complexity. Under label corruption, this is the first efficient linear regression algorithm to guarantee both $(\varepsilon,\delta)$-DP and robustness. Synthetic experiments confirm the superiority of our approach.
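The two ingredients named in the abstract, full-batch gradients and adaptive clipping, can be illustrated with a short NumPy sketch. This is not the paper's exact algorithm: the clipping rule (a robust quantile of per-example gradient norms), the privacy accounting (basic composition rather than a tighter accountant), and all hyperparameters below are illustrative assumptions.

```python
import numpy as np

def dp_robust_linear_regression(X, y, epsilon, delta, T=50, lr=0.1, seed=0):
    """Sketch: DP full-batch gradient descent with adaptive clipping for
    linear regression under label corruption. Illustrative only."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    # Gaussian-mechanism noise multiplier via basic composition over T
    # steps (a simplifying assumption; tighter accounting is possible).
    sigma_mult = np.sqrt(2.0 * np.log(1.25 / delta)) * T / epsilon
    for _ in range(T):
        residuals = X @ theta - y
        per_example_grads = residuals[:, None] * X        # shape (n, d)
        norms = np.linalg.norm(per_example_grads, axis=1)
        # Adaptive clipping: threshold at a robust quantile of the norms,
        # so examples with corrupted labels (huge residuals) are clipped
        # down instead of dominating the mean gradient.
        C = np.quantile(norms, 0.7)                       # assumed quantile
        scale = np.minimum(1.0, C / np.maximum(norms, 1e-12))
        clipped = per_example_grads * scale[:, None]
        # Full-batch mean gradient plus Gaussian noise calibrated to C.
        noise = rng.normal(0.0, sigma_mult * C / n, size=d)
        theta -= lr * (clipped.mean(axis=0) + noise)
    return theta
```

Because the clipping threshold tracks the bulk of the gradient-norm distribution, it shrinks as the honest residuals shrink, which in turn shrinks the injected noise; a fixed threshold would not adapt this way.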
Jan-30-2023
- Country:
- North America > United States (0.45)
- Genre:
  - Research Report
    - Experimental Study (0.45)
    - New Finding (0.67)
    - Promising Solution (0.48)
- Industry:
- Information Technology > Security & Privacy (1.00)