Export Reviews, Discussions, Author Feedback and Meta-Reviews
First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance.

This submission describes a novel autoencoder method that uses unsupervised learning to configure a recurrent network to encode both the current and past states of an input. I am neither a mathematician nor a machine learning expert, and thus am not qualified to review the work for technical merit. However, I have extensive experience in neural network modeling, and thus appreciate both the objective and the purported accomplishment: the ability to train a recurrent network to store input sequences efficiently using unsupervised learning. The authors describe a mechanism that addresses the problem by breaking it into two stages -- autoencoding, and then optimization -- that are carried out over different time scales.
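For readers unfamiliar with the setup, the sketch below illustrates the objective the review describes -- a recurrent network whose current state encodes recent input history -- using a generic reservoir-style stand-in rather than the submission's two-stage, two-timescale learning rule. The network sizes, the fixed random recurrent weights, and the ridge-regression readout are all assumptions made purely for illustration.

```python
# A generic, hedged sketch of the goal described in the review: a recurrent
# network whose state encodes both the current and past inputs, assessed by
# how well a linear decoder can reconstruct the last K inputs from the state.
# This is NOT the submission's method; all sizes and choices are illustrative.
import numpy as np

rng = np.random.default_rng(0)
dim_x, dim_h, K, T = 3, 100, 5, 5000

# Fixed random recurrent encoder (the part the submission would learn).
U = rng.standard_normal((dim_h, dim_x)) * 0.5
W = rng.standard_normal((dim_h, dim_h))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep the dynamics stable

X = rng.standard_normal((T, dim_x))               # unlabeled input stream
H = np.zeros((T, dim_h))
h = np.zeros(dim_h)
for t in range(T):
    h = np.tanh(W @ h + U @ X[t])
    H[t] = h

# "Autoencoding" readout: reconstruct the K most recent inputs from the state
# by ridge regression (no labels are used, only the inputs themselves).
targets = np.hstack([X[K - 1 - k: T - k] for k in range(K)])   # (T-K+1, K*dim_x)
states = H[K - 1:]
D = np.linalg.solve(states.T @ states + 1e-2 * np.eye(dim_h),
                    states.T @ targets)

recon = states @ D
for k in range(K):
    err = np.mean((recon[:, k * dim_x:(k + 1) * dim_x]
                   - targets[:, k * dim_x:(k + 1) * dim_x]) ** 2)
    print(f"reconstruction MSE for the input {k} steps in the past: {err:.3f}")
```

In this stand-in, reconstruction error typically grows with the lag, which is one way to quantify how much of the input's past the network state retains.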
Export Reviews, Discussions, Author Feedback and Meta-Reviews
First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance.

Summary: This paper deals with sampling methods based on linear rate-based neural networks. First, it shows that symmetric weights (a common constraint in many models) significantly hurt the mixing rate. Then it shows that a (more physiological) non-normal network can have a much faster mixing rate if the connectivity is optimized for this purpose. This holds even when additional biological constraints (Dale's law) are imposed.
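To make the summarized result concrete, here is a minimal numerical sketch (not the paper's construction) of why non-symmetric connectivity can sample the same distribution faster. It assumes linear stochastic dynamics dr = A r dt + sqrt(2) dW with stationary covariance Sigma solving A Sigma + Sigma A^T + 2 I = 0; the target covariance, the network size, and the strength of the skew-symmetric perturbation are arbitrary illustrative choices.

```python
# Sketch: many different connectivity matrices A share the same stationary
# Gaussian distribution. The symmetric choice A = -Sigma^{-1} mixes at the
# rate of its slowest eigenmode; adding a skew-symmetric term,
# A = -(I + S) Sigma^{-1} with S = -S^T, keeps the stationary distribution
# but typically pushes all decay rates toward their mean, i.e. faster mixing.
import numpy as np

rng = np.random.default_rng(0)
n = 10

# Target covariance to sample from (random symmetric positive-definite matrix).
Q = rng.standard_normal((n, n))
Sigma = Q @ Q.T / n + 0.5 * np.eye(n)
Sigma_inv = np.linalg.inv(Sigma)

# Skew-symmetric perturbation; the scale 3.0 is an arbitrary illustrative choice.
G = rng.standard_normal((n, n))
S = 3.0 * (G - G.T)

A_sym = -Sigma_inv                          # symmetric ("reversible") sampler
A_nn = -(np.eye(n) + S) @ Sigma_inv         # non-normal sampler, same Sigma

def lyapunov_residual(A):
    """How well A preserves Sigma as the stationary covariance."""
    return np.max(np.abs(A @ Sigma + Sigma @ A.T + 2.0 * np.eye(n)))

def slowest_timescale(A):
    """Relaxation time of the slowest eigenmode (1 / |largest real part|)."""
    return 1.0 / np.abs(np.max(np.linalg.eigvals(A).real))

for name, A in [("symmetric", A_sym), ("non-normal", A_nn)]:
    print(f"{name:11s}  Lyapunov residual: {lyapunov_residual(A):.2e}  "
          f"slowest relaxation time: {slowest_timescale(A):.2f}")
```

Both choices of A leave the stationary covariance unchanged (near-zero Lyapunov residual), but the non-normal one typically shortens the slowest relaxation time, which is the mixing-rate effect the review summarizes; the paper's contribution, as described, is to optimize the connectivity for this purpose under biological constraints.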