Appendix A: Dynamic weight sharing

Neural Information Processing Systems 

A.1 Noiseless case

Each neuron receives the same k-dimensional input x, and its response z …

From Eq. (19) it is clear that …

Dynamics of the weight update that uses Eq. (8b) …

In each iteration, the input is presented for 150 ms. Realistically, all neurons can't see the same …

Let's also bound the input mean and noise as E …

Therefore, we can bound the full gradient by the sum of the individual bounds (as it's the Frobenius …

Both plots in Figure 1 show the mean negative log SNR over 10 runs, with 100 output neurons each. Learning was performed via SGD with a momentum of 0.95. The minimum SNR value was computed from Eq. (5). For our data, the SNR expression in Eq. (6) has …

The code for both runs is provided in the supplementary material.
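The text above states that learning used SGD with a momentum of 0.95. As a minimal sketch of that optimizer (the standard heavy-ball formulation; the learning rate, loss, and function names here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=0.05, momentum=0.95):
    """One SGD-with-momentum update.

    velocity accumulates a decaying sum of past gradients;
    momentum=0.95 matches the value reported in the text,
    while lr is an assumed placeholder.
    """
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Illustrative usage: minimize the toy quadratic 0.5 * ||w||^2,
# whose gradient is simply w.
w = np.array([1.0, -2.0])
velocity = np.zeros_like(w)
for _ in range(500):
    w, velocity = sgd_momentum_step(w, grad=w, velocity=velocity)
```

With a momentum this high, updates average gradients over a long horizon, so individual steps can overshoot before the iterate settles near the minimum.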
