
Neural Information Processing Systems

Recent empirical successes in large-scale machine learning have been powered by massive data parallelism and hardware acceleration, with batch sizes trending beyond 10K+ images [46] or 1M+ tokens [9]. Numerous interdisciplinary sources [5, 12, 24, 33] indicate that the performance bottlenecks of contemporary deep learning pipelines can lie in many places other than gradient computation.



HiPPO-Prophecy: State-Space Models can Provably Learn Dynamical Systems in Context

Federico Arangath Joseph, Kilian Haefeli, Noah Liniger, Caglar Gulcehre

arXiv.org Machine Learning

This work explores the in-context learning capabilities of State Space Models (SSMs) and presents, to the best of our knowledge, the first theoretical explanation of a possible underlying mechanism. We introduce a novel weight construction for SSMs, enabling them to predict the next state of any dynamical system after observing previous states, without parameter fine-tuning. This is accomplished by extending the HiPPO framework to demonstrate that continuous SSMs can approximate the derivative of any input signal. Specifically, we find an explicit weight construction for continuous SSMs and provide an asymptotic error bound on the derivative approximation. Discretizing this continuous SSM then yields a discrete SSM that predicts the next state. Finally, we demonstrate the effectiveness of our parameterization empirically. We view this work as an initial step toward understanding how sequence models based on SSMs learn in context.
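
To make the described mechanism concrete, here is a minimal equation sketch in our own notation rather than the paper's exact construction; in particular, the readout matrix C and the forward-Euler step below are illustrative assumptions. A continuous SSM keeps a state x(t) that compresses the input history; if its readout is built to approximate the input's derivative, discretization turns it into a next-state predictor.

    % Continuous SSM: the state x(t) summarizes the history of the input u(t).
    \dot{x}(t) = A\,x(t) + B\,u(t)
    % Readout chosen, in the spirit of the paper's construction, so that the
    % output tracks the input's derivative.
    y(t) = C\,x(t) \approx \dot{u}(t)
    % Assumed discretization: a forward-Euler step of size \Delta t converts the
    % derivative estimate into a next-state prediction, with no fine-tuning.
    u_{k+1} \approx u_k + \Delta t\, y_k

The paper's asymptotic error bound governs how closely y(t) tracks \dot{u}(t), and hence how accurate the one-step prediction of u_{k+1} can be.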