Approximate Newton-based statistical inference using only stochastic gradients
Tianyang Li, Anastasios Kyrillidis, Liu Liu, Constantine Caramanis
We present a novel inference framework for convex empirical risk minimization, using approximate stochastic Newton steps. The proposed algorithm is based on finite differences, which allow a Hessian-vector product to be approximated from first-order information alone. In theory, our method efficiently computes the statistical error covariance in $M$-estimation, both for unregularized convex learning problems and for high-dimensional LASSO regression, without using exact second-order information or resampling the entire data set. In practice, we demonstrate the effectiveness of our framework on large-scale machine learning problems that go beyond convexity: as a highlight, our work can be used to detect certain adversarial attacks on neural networks.
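The Hessian-vector approximation the abstract refers to can be sketched with a central finite difference of gradients. Below is a minimal illustration, assuming a logistic-regression objective; the helper names (`stochastic_grad`, `hvp_finite_diff`) and the step size `delta` are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def stochastic_grad(theta, X, y):
    """Minibatch gradient of the logistic-regression loss (an illustrative convex ERM objective)."""
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))   # predicted probabilities
    return X.T @ (p - y) / len(y)

def hvp_finite_diff(theta, v, X, y, delta=1e-5):
    """Approximate the Hessian-vector product H @ v from two gradient calls only:
    H @ v ~ (grad(theta + delta*v) - grad(theta - delta*v)) / (2 * delta)."""
    g_plus = stochastic_grad(theta + delta * v, X, y)
    g_minus = stochastic_grad(theta - delta * v, X, y)
    return (g_plus - g_minus) / (2.0 * delta)

# Sanity check against the exact logistic-regression Hessian on synthetic data.
rng = np.random.default_rng(0)
n, d = 512, 5
X = rng.normal(size=(n, d))
y = (rng.random(n) < 0.5).astype(float)
theta, v = rng.normal(size=d), rng.normal(size=d)

p = 1.0 / (1.0 + np.exp(-(X @ theta)))
H = (X.T * (p * (1.0 - p))) @ X / n           # exact d x d Hessian
print(np.allclose(hvp_finite_diff(theta, v, X, y), H @ v, atol=1e-6))  # True
```

Products of this form are what let iterative solvers such as conjugate gradient apply approximate Newton steps without ever forming, storing, or inverting the Hessian.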
May 22, 2018
- Country:
  - North America > United States
    - California (0.14)
    - Texas (0.14)
- Genre:
  - Research Report > Experimental Study (0.46)
- Industry:
  - Education > Focused Education
    - Special Education (0.44)
  - Government > Military (0.34)
  - Information Technology > Security & Privacy (0.34)