GEMINI: Gradient Estimation Through Matrix Inversion After Noise Injection

Le Cun, Yann; Galland, Conrad C.; Hinton, Geoffrey E.

Neural Information Processing Systems 

Learning procedures that measure how random perturbations of unit activities correlate with changes in reinforcement are inefficient but simple to implement in hardware. Procedures like back-propagation (Rumelhart, Hinton and Williams, 1986) which compute how changes in activities affect the output error are much more efficient, but require more complex hardware. GEMINI is a hybrid procedure for multilayer networks, which shares many of the implementation advantages of correlational reinforcement procedures but is more efficient. GEMINI injects noise only at the first hidden layer and measures the resultant effect on the output error. A linear network associated with each hidden layer iteratively inverts the matrix which relates the noise to the error change, thereby obtaining the error-derivatives. No back-propagation is involved, thus allowing unknown non-linearities in the system. Two simulations demonstrate the effectiveness of GEMINI.
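The core idea of the abstract — inject small noise vectors at the hidden layer, observe the resulting error changes, and invert the linear relation between the two to recover error derivatives — can be illustrated with a minimal sketch. The toy network, dimensions, and the use of a least-squares solver in place of the paper's iterative linear-network inversion are assumptions for illustration only.

```python
import numpy as np

# Minimal sketch of the GEMINI idea, assuming a toy feed-forward setup.
# The network, dimensions, and least-squares solver are illustrative
# assumptions, not the paper's implementation.

rng = np.random.default_rng(0)

n_hidden = 8          # units in the first hidden layer
n_probes = 16         # number of noise injections (>= n_hidden for a solvable system)
noise_scale = 1e-3    # small perturbations keep the error change approximately linear

def forward_error(hidden_activities):
    """Stand-in for the rest of the network plus the output error.
    In GEMINI this part may contain unknown non-linearities; only the
    scalar error it produces is observed."""
    w = np.linspace(0.5, 1.5, n_hidden)          # arbitrary fixed downstream weights
    return float(np.sum((w * hidden_activities) ** 2))

h = rng.normal(size=n_hidden)                    # current first-hidden-layer activities
e0 = forward_error(h)                            # baseline error

# Inject noise at the first hidden layer and record the error changes.
N = noise_scale * rng.normal(size=(n_probes, n_hidden))    # noise matrix
d_errors = np.array([forward_error(h + n) - e0 for n in N])

# Solve N @ g ~ d_errors for g, the error derivatives w.r.t. hidden activities.
# The paper describes an iterative matrix inversion by an associated linear
# network; here a least-squares solve plays the same role.
g_estimate, *_ = np.linalg.lstsq(N, d_errors, rcond=None)

# Sanity check against the analytic gradient of the toy error.
w = np.linspace(0.5, 1.5, n_hidden)
g_true = 2 * w ** 2 * h
print("estimated:", np.round(g_estimate, 3))
print("analytic: ", np.round(g_true, 3))
```

With small noise the error change is dominated by its linear term, so the recovered vector approximates the true gradient without ever back-propagating through the downstream layers.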
