Phase-Space Learning

Fu-Sheng Tsung and Garrison W. Cottrell

Neural Information Processing Systems 

Existing recurrent net learning algorithms are inadequate. We introduce the conceptual framework of viewing recurrent training as matching vector fields of dynamical systems in phase space. Phase-space reconstruction techniques make the hidden states explicit, reducing temporal learning to a feed-forward problem. In short, we propose viewing iterated prediction [LF88] as the best way of training recurrent networks on deterministic signals. Using this framework, we can train multiple trajectories, insure their stability, and design arbitrary dynamical systems.

1 INTRODUCTION

Existing general-purpose recurrent algorithms are capable of rich dynamical behavior. Unfortunately, straightforward applications of these algorithms to training fully-recurrent networks on complex temporal tasks have had much less success than their feedforward counterparts. For example, to train a recurrent network to oscillate like a sine wave (the "hydrogen atom" of recurrent learning), existing techniques such as Real Time Recurrent Learning (RTRL) [WZ89] perform suboptimally. Williams & Zipser trained a two-unit network with RTRL, with one teacher signal. One unit of the resulting network showed a distorted waveform, the other only half the desired amplitude.
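The reduction described in the abstract can be sketched numerically. The sketch below is not the paper's network: as a simplifying assumption, a linear least-squares predictor stands in for the feed-forward net, which suffices for the sine-wave example because a sinusoid's next-state map on a two-dimensional delay embedding is exactly linear. The steps are the ones named above: delay-embed the signal to make the hidden states explicit, fit the next-state map as an ordinary feed-forward regression, then generate the oscillation by iterated prediction.

```python
import numpy as np

# Teacher signal: a sine wave, the "hydrogen atom" of recurrent learning.
t = np.arange(200)
signal = np.sin(0.3 * t)

# Phase-space reconstruction: a 2-dimensional delay embedding.
# Each row (x[n-1], x[n]) is an explicit state; the target is x[n+1].
X = np.stack([signal[:-2], signal[1:-1]], axis=1)
y = signal[2:]

# Temporal learning reduced to a feed-forward fit (here a linear map,
# an assumption standing in for the feed-forward network).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Iterated prediction: feed each output back in as the newest input.
state = list(signal[:2])
for _ in range(500):
    state.append(float(np.dot(w, state[-2:])))
pred = np.array(state)
```

Because the fitted map reproduces the recurrence x[n+1] = 2cos(0.3)·x[n] − x[n−1], the iterated trajectory remains a bounded, full-amplitude oscillation rather than the distorted or half-amplitude waveforms reported for RTRL.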
