GEMINI: Gradient Estimation Through Matrix Inversion After Noise Injection

Neural Information Processing Systems

Learning procedures that measure how random perturbations of unit activities correlate with changes in reinforcement are inefficient but simple to implement in hardware. Procedures like back-propagation (Rumelhart, Hinton and Williams, 1986) which compute how changes in activities affect the output error are much more efficient, but require more complex hardware. GEMINI is a hybrid procedure for multilayer networks, which shares many of the implementation advantages of correlational reinforcement procedures but is more efficient. GEMINI injects noise only at the first hidden layer and measures the resultant effect on the output error. A linear network associated with each hidden layer iteratively inverts the matrix which relates the noise to the error change, thereby obtaining the error-derivatives. No back-propagation is involved, thus allowing unknown non-linearities in the system. Two simulations demonstrate the effectiveness of GEMINI.
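
The noise-inversion principle behind GEMINI can be illustrated with a small numerical sketch. This is not the paper's iterative linear-network procedure; it simply assumes a locally linear relation between injected noise and the resulting error change, and recovers the error-derivative with a least-squares solve standing in for the matrix inversion. The names `true_grad` and `error_change` are hypothetical stand-ins for the unknown network quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

hidden_size = 5
true_grad = rng.normal(size=hidden_size)  # stands in for the unknown dE/dh

def error_change(noise):
    """Change in output error caused by injecting `noise` at the hidden layer.
    For a small perturbation this is approximately noise . dE/dh."""
    return noise @ true_grad

# Inject small random perturbations and record the resulting error changes.
num_trials = 50
noise = 0.01 * rng.normal(size=(num_trials, hidden_size))
delta_e = np.array([error_change(n) for n in noise])

# Invert the relation between injected noise and observed error change
# (least squares here, in place of the paper's iterative inversion network).
estimated_grad, *_ = np.linalg.lstsq(noise, delta_e, rcond=None)
print(np.allclose(estimated_grad, true_grad, atol=1e-6))  # True
```

Because only the scalar error change is observed for each noise injection, nothing about the system between the hidden layer and the output needs to be differentiable or even known, which is the point of avoiding back-propagation.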


What is Deep Learning -- Part I

#artificialintelligence

Whenever we hear the term 'Deep Learning', we have mostly heard about Neural Networks. So, what is a neural network? This is what we will discuss in this blog. Let us consider a neuron in our brain, like the figure below. It has 3 important parts: the nucleus, dendrites and axon.


maciejkula/spotlight

#artificialintelligence

Large embedding layers are a performance problem for fitting models: even though the gradients are sparse (only a handful of user and item vectors need parameter updates in every minibatch), PyTorch updates the entire embedding layer at every backward pass. Computation time is then wasted on applying zero gradient steps to the whole embedding matrix. To alleviate this problem, we can use a smaller underlying embedding layer, and probabilistically hash users and items into that smaller space. With good hash functions, collisions should be rare, and we should observe fitting speedups without a decrease in accuracy. The implementation in Spotlight follows the RecSys 2017 paper "Getting deep recommenders fit: Bloom embeddings for sparse binary input/output networks".
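
The idea can be sketched in a few lines of PyTorch. The module below is a hypothetical illustration, not Spotlight's actual implementation: it hashes each id several times into a smaller table (using random odd multipliers as cheap hash functions, an assumption made here for brevity) and sums the retrieved vectors, so the parameter count depends on the compressed table size rather than on the raw number of users or items.

```python
import torch
import torch.nn as nn

class BloomEmbedding(nn.Module):
    """Hash entity ids into a smaller embedding table with several hash
    functions and sum the resulting vectors (Bloom-style embedding)."""

    def __init__(self, num_embeddings, embedding_dim,
                 compression_ratio=0.25, num_hashes=4):
        super().__init__()
        self.compressed_size = max(1, int(num_embeddings * compression_ratio))
        self.num_hashes = num_hashes
        # Smaller underlying embedding layer shared by all hash functions.
        self.embeddings = nn.Embedding(self.compressed_size, embedding_dim)
        # Random odd multipliers act as simple hash functions (illustrative choice).
        self.register_buffer(
            'multipliers',
            torch.randint(1, 2 ** 31 - 1, (num_hashes,)) * 2 + 1,
        )

    def forward(self, indices):
        # indices: tensor of entity ids; hash each id num_hashes times
        # into the compressed table.
        hashed = (indices.unsqueeze(-1) * self.multipliers) % self.compressed_size
        # Sum the vectors retrieved by each hash function.
        return self.embeddings(hashed).sum(dim=-2)

# Usage: a million users represented by a table a quarter of that size.
users = BloomEmbedding(num_embeddings=1_000_000, embedding_dim=32)
vectors = users(torch.tensor([3, 17, 999_999]))  # shape (3, 32)
```

Summing (rather than concatenating) the hashed vectors keeps the output dimension fixed regardless of the number of hash functions, and collisions only matter when two ids share every hash bucket, which is unlikely with several independent hashes.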