

6454dcd80b5373daaa97e53ce32c78a1-Paper-Conference.pdf

Neural Information Processing Systems

In light of the vast quantities of personal and sensitive information involved, traditional methods of ensuring privacy are encountering significant challenges. We propose two innovative algorithms, DP-GLMtron and DP-TAGLMtron, that outperform the conventional DP-SGD.
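As an illustration of the DP-SGD baseline these methods are compared against, here is a minimal sketch of one noisy, clipped gradient step. The clipping norm, noise multiplier, and per-example gradient inputs are illustrative placeholders, not values taken from the paper.

import numpy as np

def dp_sgd_step(w, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.0):
    """One DP-SGD step: clip each per-example gradient, average, add Gaussian noise."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise on the averaged gradient: std = noise_mult * clip_norm / batch_size.
    std = noise_mult * clip_norm / len(per_example_grads)
    noise = np.random.normal(0.0, std, size=w.shape)
    return w - lr * (avg + noise)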


Generalization Bounds for Neural Networks via Approximate Description Length

Amit Daniely, Elad Granot

Neural Information Processing Systems

Namely, that the empirical loss of all the functions in the class is ε-close to the true loss. Finally, we develop a set of tools for calculating the approximate description length of classes of functions that can be presented as a composition of linear function classes and non-linear functions.
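The ε-closeness claim is the standard uniform convergence guarantee; written out in the usual notation (L_S the empirical loss on a sample S of size m, L_D the true loss, H the function class; this notation is assumed here, the excerpt does not fix it):

\sup_{h \in \mathcal{H}} \bigl| L_S(h) - L_D(h) \bigr| \le \epsilon,
\qquad
L_S(h) = \frac{1}{m}\sum_{i=1}^{m} \ell\bigl(h(x_i), y_i\bigr),
\quad
L_D(h) = \mathbb{E}_{(x,y)\sim D}\bigl[\ell(h(x), y)\bigr].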


Online Convex Optimization with Continuous Switching Constraint

Neural Information Processing Systems

In many sequential decision making applications, the change of decision would bring an additional cost, such as the wear-and-tear cost associated with changing server status. To control the switching cost, we introduce the problem of online convex optimization with continuous switching constraint, where the goal is to achieve a small regret given a budget on the overall switching cost. We first investigate the hardness of the problem, and provide a lower bound of order Ω(√T) when the switching cost budget S = Ω(√T), and Ω(min{T/S, T}) when S = O(√T), where T is the time horizon. The essential idea is to carefully design an adaptive adversary, who can adjust the loss function according to the cumulative switching cost the player has incurred so far, based on the orthogonal technique. We then develop a simple gradient-based algorithm which enjoys the minimax optimal regret bound.
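A minimal sketch of the kind of budget-aware gradient method the abstract alludes to: plain online gradient descent that only moves while the cumulative switching cost (measured here as total Euclidean movement) stays within the budget S. The freeze-when-exhausted rule and fixed step size are illustrative assumptions, not the paper's algorithm.

import numpy as np

def budgeted_ogd(grads, w0, budget, lr=0.1):
    """Online gradient descent that freezes once the switching budget is spent.

    grads: gradient vectors revealed one round at a time.
    budget: bound S on the total movement sum of ||w_t - w_{t-1}||.
    """
    w = np.asarray(w0, dtype=float)
    spent = 0.0
    iterates = [w.copy()]
    for g in grads:
        step = lr * np.asarray(g, dtype=float)
        move = np.linalg.norm(step)
        if spent + move > budget:  # budget exhausted: stop switching
            step = np.zeros_like(step)
            move = 0.0
        w = w - step
        spent += move
        iterates.append(w.copy())
    return iterates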



sup

Neural Information Processing Systems

In the deterministic setting, where the data is given without any probabilistic assumptions, significant advances in DP linear regression have been made [77, 57, 68, 16, 7, 83, 31, 67, 82, 71]. In the randomized setting, each example (x_i, y_i) is drawn i.i.d. We explain the closely related ones in Section 2.3, with analysis when the covariance matrix has a spectral gap. The resulting utility guarantees are the same as those from [23], which are discussed in Section 2.3. When privacy is not required, we know from Theorem 2.2 that under Assumptions A.1-A.3, we can achieve an error rate of O(κ√(V/n)).
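Spelling out that non-private rate as displayed math; the reading of κ as the condition number of the covariance Σ and V as a variance-type quantity is an assumption on my part, since the excerpt does not define the symbols:

\mathrm{err}(\hat{w}) \;=\; O\!\left(\kappa \sqrt{\tfrac{V}{n}}\right),
\qquad
\kappa \;=\; \frac{\lambda_{\max}(\Sigma)}{\lambda_{\min}(\Sigma)}.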


Appendices: A. Bernoulli-CRS Properties

Neural Information Processing Systems

Let us define K ∈ R^{n×n}, a random diagonal sampling matrix where K_{j,j} ~ Bernoulli(p_j) for 1 ≤ j ≤ n. Therefore, Bernoulli-CRS will perform on average the same amount of computations as in the fixed-rank CRS. This formulation immediately hints at the possibility to sample over the input channel dimension, similarly to sampling column-row pairs in matrices. Let ℓ be a β-Lipschitz loss function, and let the network be trained with SGD using a properly decreasing learning rate. Let us denote the weight, bias, and activation gradients with respect to a loss function ℓ by ∇W_l, ∇b_l, and ∇a_l, respectively.
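A minimal sketch of Bernoulli column-row sampling (CRS) for approximate matrix multiplication, under the assumption that the estimator is the usual unbiased one, rescaling each kept column-row pair by 1/p_j; uniform keep probabilities are used here only for simplicity.

import numpy as np

def bernoulli_crs_matmul(A, B, p):
    """Unbiased approximation of A @ B via Bernoulli column-row sampling.

    A: (m, n), B: (n, k), p: length-n vector of keep probabilities.
    Column j of A and row j of B are kept with probability p[j] and
    rescaled by 1/p[j], so the estimate equals A @ B in expectation,
    matching K = diag(Bernoulli(p_j)) above.
    """
    n = A.shape[1]
    keep = np.random.rand(n) < p          # K_{j,j} ~ Bernoulli(p_j)
    idx = np.nonzero(keep)[0]
    scale = 1.0 / p[idx]                  # rescale kept pairs for unbiasedness
    return (A[:, idx] * scale) @ B[idx, :]

With uniform p_j = k/n, the expected number of sampled column-row pairs is k, which matches the computation budget of fixed-rank CRS mentioned above.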





cb77649f5d53798edfa0ff40dae46322-Paper.pdf

Neural Information Processing Systems

Optimization is a key component for training machine learning models and has a strong impact on their generalization. In this paper, we consider a particular optimization method, the stochastic gradient Langevin dynamics (SGLD) algorithm, and investigate the generalization of models trained by SGLD.
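For reference, a minimal sketch of the SGLD update the abstract refers to: a stochastic gradient step plus Gaussian noise whose scale is tied to the step size. The inverse temperature beta and the step-size choice are illustrative assumptions, not the paper's settings.

import numpy as np

def sgld_step(w, grad, lr, beta=1.0, rng=None):
    """One SGLD update: w <- w - lr * grad + sqrt(2 * lr / beta) * N(0, I)."""
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.normal(0.0, 1.0, size=w.shape)
    return w - lr * grad + np.sqrt(2.0 * lr / beta) * noise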