




A Table of Notation

    Symbol  Description
    k       Number of latent dimensions in hidden layer of autoencoder
    m       Number of dimensions of input data
    n       Number of datapoints
    W

Neural Information Processing Systems

Table 1: Summary of notation used in this manuscript, ordered according to introduction in the main text. This can be justified by the following lemma (Lemma 1). The proof is a simple application of the chain rule and Taylor's theorem; thus, we need only compute the second derivative of the regularization terms. We proceed to take derivatives.
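As an illustrative instance of the second-derivative computation (assuming a standard squared Frobenius penalty $\lambda \|W\|_F^2$; the manuscript's regularizer may instead weight latent dimensions non-uniformly), the relevant derivatives are

```latex
\frac{\partial}{\partial W_{ij}} \lambda \|W\|_F^2 = 2\lambda W_{ij},
\qquad
\frac{\partial^2}{\partial W_{ij}\,\partial W_{kl}} \lambda \|W\|_F^2
  = 2\lambda\,\delta_{ik}\delta_{jl},
```

i.e., the Hessian of such a penalty is the constant matrix $2\lambda I$, so it contributes a fixed positive-definite term to any second-order (Taylor) expansion of the objective.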


Regularized linear autoencoders recover the principal components, eventually


While there has been rapid progress in understanding the learning dynamics of neural networks, most such work focuses on the networks' ability to fit input-output relationships. However, many machine learning problems require learning representations with general utility.
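A minimal sketch of the phenomenon in the title: a linear autoencoder trained by gradient descent on a non-uniform l2 penalty, whose encoder columns end up aligned with the top principal subspace of the data. The penalty weights `lam`, step size `lr`, iteration count, and synthetic data scales are illustrative choices for this sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: m = 5 input dimensions, n = 500 points, with variance
# concentrated in the first two coordinate axes so the principal
# directions are easy to check against.
n, m, k = 500, 5, 2
scales = np.array([3.0, 2.0, 0.3, 0.2, 0.1])
X = rng.normal(size=(n, m)) * scales

# Linear autoencoder: reconstruct X as (X @ W1) @ W2, with encoder
# W1 (m x k) and decoder W2 (k x m). The non-uniform l2 penalty puts a
# different weight lam[j] on latent dimension j, breaking the rotational
# symmetry of the unregularized problem.
lam = np.array([0.05, 0.10])   # illustrative, non-uniform penalty weights
W1 = rng.normal(scale=0.1, size=(m, k))
W2 = rng.normal(scale=0.1, size=(k, m))
lr = 1e-3

for _ in range(10_000):
    Z = X @ W1
    R = Z @ W2 - X                                 # reconstruction residual
    gW1 = X.T @ R @ W2.T / n + 2 * lam * W1        # column j penalized by lam[j]
    gW2 = Z.T @ R / n + 2 * lam[:, None] * W2      # row j penalized by lam[j]
    W1 -= lr * gW1
    W2 -= lr * gW2

# Compare the learned encoder columns with the top-k principal directions
# of the sample covariance.
U, _, _ = np.linalg.svd(X.T @ X / n)
top = U[:, :k]
cols = W1 / np.linalg.norm(W1, axis=0)
proj = np.linalg.norm(top.T @ cols, axis=0)  # near 1.0 means the column lies in the top subspace
print(proj)
```

The non-uniform penalty is what makes the individual columns informative: with a uniform penalty, any rotation of the latent space gives the same loss, so only the spanned subspace (not the principal axes themselves) is identified.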