Surfing: Iterative Optimization Over Incrementally Trained Deep Networks

Ganlin Song, Zhou Fan, John Lafferty

Neural Information Processing Systems 

We investigate a sequential optimization procedure to minimize the empirical risk functional $f_{\hat\theta}(x) = \frac{1}{2}\|G_{\hat\theta}(x) - y\|^2$ for certain families of deep networks $G_{\theta}(x)$. The approach is to optimize a sequence of objective functions that use network parameters obtained during different stages of the training process. When initialized with random parameters $\theta_0$, we show that the objective $f_{\theta_0}(x)$ is ``nice'' and easy to optimize with gradient descent. As learning is carried out, we obtain a sequence of generative networks $x \mapsto G_{\theta_t}(x)$ and associated risk functions $f_{\theta_t}(x)$, where $t$ indicates a stage of stochastic gradient descent during training. Since the parameters of the network do not change by very much in each step, the surface evolves slowly and can be incrementally optimized.
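As a rough illustration of the procedure described above, the following is a minimal sketch of warm-started gradient descent over a sequence of training checkpoints. It is not the paper's implementation; the names `generator`, `checkpoints`, `step_size`, and `num_steps` are assumed placeholders for a generative network $G_{\theta}(x)$, its saved parameters $\theta_0, \theta_1, \dots$, and optimization hyperparameters.

```python
# Sketch of "surfing": minimize f_{theta_t}(x) = 0.5 * ||G_{theta_t}(x) - y||^2
# for each checkpoint theta_t in turn, warm-starting from the previous minimizer.
# `generator(theta, x)` is an assumed interface, not defined in the paper's abstract.
import jax
import jax.numpy as jnp

def surf(generator, checkpoints, y, x0, step_size=1e-2, num_steps=200):
    def risk(x, theta):
        # Empirical risk for the network with parameters theta.
        return 0.5 * jnp.sum((generator(theta, x) - y) ** 2)

    grad_x = jax.grad(risk, argnums=0)  # gradient with respect to the latent input x
    x = x0                              # start from the minimizer of the "nice" initial surface
    for theta in checkpoints:           # surfaces f_{theta_0}, f_{theta_1}, ... evolve slowly
        for _ in range(num_steps):      # plain gradient descent on the current surface
            x = x - step_size * grad_x(x, theta)
    return x                            # approximate minimizer of the final objective f_{theta_T}
```

Because consecutive checkpoints differ only slightly, each inner optimization starts close to a minimizer of the current surface, which is the intuition behind tracking the evolving objective rather than optimizing the final trained network from scratch.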