Appendix A More examples of optimality criteria and fixed points

Neural Information Processing Systems 

Newton fixed point. Newton's method for root-finding corresponds to the fixed point iteration T(x, θ) = x − η [∂_1 F(x, θ)]^{-1} F(x, θ), where η > 0 is a step size. Newton's method for optimization is obtained by choosing G(x, θ) = [∇_1^2 f(x, θ)]^{-1} in the preconditioned iteration T(x, θ) = x − η G(x, θ) ∇_1 f(x, θ); more generally, any G(x, θ) that is positive semi-definite can be used.

Proximal block coordinate descent fixed point. Clearly, when the step sizes are shared across blocks, i.e., η_1 = ... = η_b = η, this fixed point reduces to the proximal gradient fixed point.

We now show how to use the KKT conditions discussed in §2.2. With our framework, no derivation is needed. However, since this linear minimization oracle (LMO) is piecewise constant, its Jacobian is null almost everywhere.
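As an illustrative sketch (not the paper's implementation), the Newton root-finding fixed point T(x, θ) = x − η [∂_1 F(x, θ)]^{-1} F(x, θ) can be written in the scalar case, where the Jacobian inverse reduces to division by F'(x). A root of F is then a fixed point of T, since F(x*) = 0 implies T(x*) = x*.

```python
def newton_fixed_point_step(F, dF, x, eta=1.0):
    """One step of the Newton root-finding fixed point
    T(x) = x - eta * F(x) / F'(x) (scalar case, so the
    Jacobian inverse is just division by the derivative)."""
    return x - eta * F(x) / dF(x)

def iterate(F, dF, x, eta=1.0, n=20):
    """Repeatedly apply T until (approximate) convergence."""
    for _ in range(n):
        x = newton_fixed_point_step(F, dF, x, eta)
    return x

# Example: F(x) = x^2 - 2 has root sqrt(2).
F = lambda x: x * x - 2.0
dF = lambda x: 2.0 * x
root = iterate(F, dF, x=1.0)
print(abs(root - 2.0 ** 0.5) < 1e-10)  # True: converged to sqrt(2)
```

With η = 1 this is exactly the classical Newton iteration; smaller η damps the step, which can help when the initial point is far from the root.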
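The proximal gradient fixed point T(x, θ) = prox_{ηg}(x − η ∇_1 f(x, θ)) can be sketched on a hypothetical one-dimensional lasso-style problem, f(x, θ) = ½(x − θ)² with g(x) = λ|x|, whose prox is soft-thresholding (the choice of f, g, and all names here are illustrative assumptions, not from the paper):

```python
def soft_threshold(u, tau):
    """prox of tau * |.|: shrink u toward zero by tau."""
    if u > tau:
        return u - tau
    if u < -tau:
        return u + tau
    return 0.0

def prox_grad_step(x, theta, lam, eta):
    """T(x, theta) = prox_{eta*lam*|.|}(x - eta * grad_f(x, theta))
    with f(x, theta) = 0.5 * (x - theta)**2, so grad_f = x - theta."""
    grad = x - theta
    return soft_threshold(x - eta * grad, eta * lam)

# Iterating T converges to the lasso solution x* = soft_threshold(theta, lam).
x, theta, lam, eta = 0.0, 2.0, 0.5, 0.8
for _ in range(100):
    x = prox_grad_step(x, theta, lam, eta)
print(abs(x - 1.5) < 1e-8)  # True: x* = soft_threshold(2.0, 0.5) = 1.5
```

In the block coordinate variant, each block x_i gets its own step size η_i in an update of this form; when all η_i coincide, applying the update to every block at once is exactly the full proximal gradient step above.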
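To make the KKT conditions concrete, here is a hedged sketch (with hypothetical names, not the paper's code) that evaluates the four KKT residuals for a toy scalar problem min f(x) s.t. g(x) ≤ 0; at an optimal primal-dual pair all residuals vanish:

```python
def kkt_residuals(x, lam, grad_f, g, grad_g):
    """KKT residuals for min f(x) s.t. g(x) <= 0 (one scalar constraint):
    stationarity, primal feasibility, dual feasibility, complementary slackness."""
    return (
        grad_f(x) + lam * grad_g(x),   # stationarity of the Lagrangian
        max(g(x), 0.0),                # primal feasibility violation
        max(-lam, 0.0),                # dual feasibility violation
        lam * g(x),                    # complementary slackness
    )

# min 0.5*x^2 subject to x >= 1, written as g(x) = 1 - x <= 0.
# The solution is x* = 1 with multiplier lam* = 1.
grad_f = lambda x: x
g = lambda x: 1.0 - x
grad_g = lambda x: -1.0
print(kkt_residuals(1.0, 1.0, grad_f, g, grad_g))  # (0.0, 0.0, 0.0, 0.0)
```

Treating these residuals as the optimality criterion F(x, λ, θ) = 0 is what lets the implicit function theorem differentiate the solution map without re-deriving anything by hand.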
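The piecewise-constant behaviour of a linear minimization oracle is easy to see on the probability simplex, where argmin_{s ∈ simplex} ⟨g, s⟩ is always the vertex e_i with i = argmin_i g_i. A small sketch (illustrative, not the paper's code):

```python
def lmo_simplex(g):
    """Linear minimization oracle over the probability simplex:
    returns the vertex e_i with i = argmin_i g_i."""
    i = min(range(len(g)), key=lambda j: g[j])
    s = [0.0] * len(g)
    s[i] = 1.0
    return s

g = [0.3, -1.2, 0.7]
print(lmo_simplex(g))  # [0.0, 1.0, 0.0]

# Piecewise constant: a small perturbation of g leaves the output
# unchanged, so the Jacobian of the LMO is zero almost everywhere.
g_eps = [gi + 1e-6 for gi in g]
print(lmo_simplex(g_eps) == lmo_simplex(g))  # True
```

This is why differentiating through the LMO directly is uninformative: its output only changes at the (measure-zero) boundaries where the argmin switches vertices.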