Impression Learning: Online Representation Learning with Synaptic Plasticity (Appendices)

Neural Information Processing Systems 

Our derivation of the update for IL (Eq. 3) is based on an expansion of the log probability. We examine the consequences of this bias formula for our specific model. Note, however, that as we will show in Appendix C, the updates in Eq. (S1) may have high variance. One remedy is the 'reparameterization trick,' in which a change of variables allows the use of stochastic gradient descent; it is worth noting that this reparameterization works only for additive Gaussian noise. As already mentioned, wake-sleep (WS) can be viewed as a special case of IL; consequently, the bias properties of its individual samples are identical to those of IL.
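A minimal sketch of the variance issue and the reparameterization fix, under assumptions of ours (a toy objective f(z) = z², a single Gaussian latent; none of these specifics come from the paper). With additive Gaussian noise, z = μ + σε for ε ~ N(0, 1), so the gradient with respect to μ can flow through the sample itself (pathwise estimator) instead of through the score function, which typically has much higher variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (our choice, purely illustrative): z ~ N(mu, sigma^2),
# objective f(z) = z^2, so E[f(z)] = mu^2 + sigma^2 and
# d/dmu E[f(z)] = 2*mu exactly.
mu, sigma, n = 1.5, 0.5, 200_000

# Reparameterization: z = mu + sigma * eps, eps ~ N(0, 1).
# The randomness is now parameter-free, so gradients pass through z.
eps = rng.standard_normal(n)
z = mu + sigma * eps

# Pathwise (reparameterized) estimator of d/dmu E[f(z)]:
# df/dmu = f'(z) * dz/dmu = 2*z * 1.
pathwise = 2.0 * z

# Score-function estimator of the same gradient:
# f(z) * d/dmu log N(z; mu, sigma^2) = z^2 * (z - mu) / sigma^2.
score = z**2 * (z - mu) / sigma**2

# Both estimators are unbiased (both means approach 2*mu = 3.0),
# but the score-function estimator's per-sample variance is far larger.
print(pathwise.mean(), pathwise.std())
print(score.mean(), score.std())
```

Both Monte Carlo means converge to the true gradient 2μ = 3.0, but the per-sample standard deviation of the score-function estimator is several times that of the pathwise one, which is the kind of variance gap the reparameterization trick is meant to close. The trick applies here precisely because the noise enters additively; for more general noise models the change of variables may not exist.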
