Appendices

A Dynamic weight sharing

Neural Information Processing Systems 

A.1 Noiseless case

Each neuron receives the same k-dimensional input x and produces a response z. The resulting weight sharing objective is strongly convex, and therefore it has a unique minimum. From Eq. (19) it is clear that all neurons converge to the same weight vector.

Figure 5: Logarithm of the inverse signal-to-noise ratio (mean weight squared over weight variance, see Eq. (6)) for weight sharing objectives in a layer with 100 neurons. B. Dynamics of the weight update that uses Eq. (8b) for different kernel sizes k. In each iteration, the input is presented for 150 ms.

A.2 Biased noiseless case, and its correspondence to the realistic implementation

The realistic implementation of dynamic weight sharing with an inhibitory neuron (Section 4.2) introduces a bias into the update rule of Eq. (13). As a result, the final weights are approximately the same among neurons, but have a small norm due to the scaling.
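To make the two regimes concrete, the following is a minimal sketch of both cases. It uses a toy consensus update, in which each neuron moves toward a (possibly down-scaled) layer mean, as a hedged stand-in for the actual dynamics of Eqs. (8b) and (13); the kernel size, learning rate, and `bias` parameter are illustrative assumptions, not values from the paper. The unbiased run reproduces the noiseless behavior (identical final weights, falling inverse SNR as in Figure 5), while the biased run yields approximately shared weights with a smaller norm, as described in A.2.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 9  # 100 neurons as in Figure 5; kernel size k = 9 is an assumed value
W0 = rng.normal(1.0, 0.5, size=(n, k))  # independent initial weights per neuron

def log_inv_snr(W):
    """Log inverse SNR: weight variance over mean weight squared (cf. Eq. (6))."""
    return np.log(W.var(axis=0).mean() / (W.mean(axis=0) ** 2).mean())

def share(W, steps=300, eta=0.1, bias=0.0):
    """Toy consensus dynamics standing in for Eqs. (8b)/(13):
    each neuron's weights move toward a down-scaled layer mean.
    bias=0 models the noiseless case; bias>0 models the biased update."""
    W = W.copy()
    for _ in range(steps):
        target = (1.0 - bias) * W.mean(axis=0, keepdims=True)
        W += eta * (target - W)
    return W

W_unbiased = share(W0)           # noiseless case: weights become identical
W_biased = share(W0, bias=0.05)  # biased case: shared but shrunk weights
```

Writing each weight vector as mean plus deviation shows why this works: one step scales the deviations by (1 - eta) and the mean by (1 - eta * bias), so deviations vanish quickly (inverse SNR drops) while a nonzero bias slowly shrinks the shared mean, leaving equal weights of reduced norm.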
