[R][1610.09027] Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes [DeepMind] • /r/MachineLearning

@machinelearnbot 

I use episodic memory, so there is no write head. Instead of deciding what to store and where to store it, the network keeps everything in a single summary state, which is written to memory at every time step. The problem then becomes learning to retrieve a previous summary state that helps with the current computation. At each time step, the network generates a retrieval key and a mask to retrieve one state.
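A minimal numpy sketch of that retrieval step, under my assumptions about the setup: the summary state is appended to memory every step, and a learned key and per-dimension mask (random stand-ins here) select a past state via masked dot-product attention. All names and the toy recurrence are illustrative, not the actual model.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def retrieve(memory, key, mask):
    # memory: (T, D) stack of past summary states
    # key, mask: (D,) retrieval key and per-dimension mask
    scores = (memory * mask) @ (key * mask)   # masked similarity to each state
    weights = softmax(scores)                 # soft selection over time steps
    return weights @ memory                   # retrieved summary state, (D,)

rng = np.random.default_rng(0)
D = 8
memory, state = [], np.zeros(D)
for t in range(5):
    # toy recurrence standing in for the real network update
    state = np.tanh(state + rng.normal(size=D))
    memory.append(state.copy())               # write summary state every step
key = rng.normal(size=D)                      # hypothetical learned key
mask = (rng.random(D) > 0.5).astype(float)    # hypothetical learned mask
retrieved = retrieve(np.stack(memory), key, mask)
print(retrieved.shape)
```

In training, gradients flow through the softmax weights, so the network can learn which past state to fetch without any write-addressing mechanism.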
