Learning Continuous Attractors in Recurrent Networks
Neural Information Processing Systems
One approach to invariant object recognition employs a recurrent neural network as an associative memory. In the standard depiction of the network's state space, memories of objects are stored as attractive fixed points of the dynamics. I argue for a modification of this picture: if an object has a continuous family of instantiations, it should be represented by a continuous attractor. This idea is illustrated with a network that learns to complete patterns. To perform the task of filling in missing information, the network develops a continuous attractor that models the manifold from which the patterns are drawn.
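The idea in the abstract can be sketched with a toy ring-attractor network (a standard illustration of a continuous attractor; all parameter values and names below are illustrative, not taken from the paper). Translation-invariant recurrent weights make every rotation of a stored activity bump equally stable, so the set of memories forms a continuous ring rather than isolated fixed points, and running the dynamics from a partial pattern fills in the missing activity:

```python
import numpy as np

# Toy ring network: N neurons with preferred angles spaced around a circle.
N = 64
theta = 2 * np.pi * np.arange(N) / N

# Translation-invariant weights: weak global inhibition plus cosine-tuned
# excitation. Because W depends only on angle differences, every rotated
# bump is equally stable -- a continuous (ring) attractor.
J0, J1 = -0.1, 1.0
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

def bump(center):
    """A pattern on the manifold: a rectified-cosine activity bump."""
    b = np.maximum(np.cos(theta - center), 0.0)
    return b / np.linalg.norm(b)

def complete(cue, steps=100):
    """Iterate r <- relu(W r), renormalizing, until the dynamics settle."""
    r = cue.copy()
    for _ in range(steps):
        r = np.maximum(W @ r, 0.0)
        norm = np.linalg.norm(r)
        if norm == 0.0:
            break
        r /= norm
    return r

# Cue: a stored bump with part of its active flank deleted (missing info).
full = bump(np.pi)
cue = full.copy()
cue[:24] = 0.0

r = complete(cue)

# The dynamics restore the deleted neurons and land back on the manifold:
# the completed state closely matches some bump on the ring.
similarity = max(float(r @ bump(c)) for c in theta)
print(round(similarity, 3))
```

The completed state need not sit at exactly the original bump's position; the cue only constrains *which point* on the attractor the dynamics select, which is precisely the "modeling the manifold" behavior the abstract describes.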
Dec-31-1998