State-Free Inference of State-Space Models: The Transfer Function Approach
Parnichkun, Rom N., Massaroli, Stefano, Moro, Alessandro, Smith, Jimmy T. H., Hasani, Ramin, Lechner, Mathias, An, Qi, Ré, Christopher, Asama, Hajime, Ermon, Stefano, Suzuki, Taiji, Yamashita, Atsushi, Poli, Michael
We approach designing a state-space model for deep learning applications through its dual representation, the transfer function, and uncover a highly efficient sequence-parallel inference algorithm that is state-free: unlike other proposed algorithms, state-free inference does not incur any significant memory or computational cost with an increase in state size. We achieve this using properties of the proposed frequency-domain transfer function parametrization, which enables direct computation of its corresponding convolutional kernel's spectrum via a single Fast Fourier Transform. Our experimental results across multiple sequence lengths and state sizes illustrate, on average, a 35% training speed improvement over S4 layers -- parametrized in the time domain -- on the Long Range Arena benchmark, while delivering state-of-the-art downstream performance among attention-free approaches. Moreover, we report improved perplexity in language modeling over a long convolutional Hyena baseline, simply by introducing our transfer function parametrization. Our code is available at https://github.com/ruke1ire/RTF.
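The following is a minimal NumPy sketch of the core idea described in the abstract, not the paper's implementation: a rational transfer function with illustrative numerator/denominator coefficients `b` and `a` is evaluated on the unit circle with one FFT per coefficient vector, giving the convolution kernel's spectrum directly, which is then multiplied with the input's spectrum so that no recurrent state is ever materialized. Function names, the padding scheme, and the toy filter are assumptions; aliasing from truncating a stable filter's tail is ignored here.

```python
import numpy as np

def rtf_kernel_spectrum(b, a, n):
    # Hypothetical sketch: evaluate H(z) = b(z^-1) / a(z^-1) at the n FFT
    # frequencies using a single FFT of each zero-padded coefficient vector.
    B = np.fft.rfft(b, n=n)
    A = np.fft.rfft(a, n=n)
    return B / A  # element-wise ratio = spectrum of the convolution kernel

def state_free_conv(u, b, a):
    # Apply the filter as a causal convolution without materializing a state:
    # FFT the input, multiply by the kernel spectrum, inverse FFT.
    L = len(u)
    n = 2 * L  # zero-pad so the circular convolution approximates the linear one
    K = rtf_kernel_spectrum(b, a, n)
    U = np.fft.rfft(u, n=n)
    return np.fft.irfft(K * U, n=n)[:L]

# Toy usage: a 4th-order rational filter applied to a length-1024 sequence.
rng = np.random.default_rng(0)
b = rng.normal(size=5)                                   # numerator coefficients
a = np.concatenate(([1.0], 0.1 * rng.normal(size=4)))    # monic, stable denominator
u = rng.normal(size=1024)
y = state_free_conv(u, b, a)
```

Note that the per-layer cost above is dominated by FFTs of length proportional to the sequence length, independent of the filter's state size, which is the sense in which inference is "state-free".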
Laughing Hyena Distillery: Extracting Compact Recurrences From Convolutions
Massaroli, Stefano, Poli, Michael, Fu, Daniel Y., Kumbong, Hermann, Parnichkun, Rom N., Timalsina, Aman, Romero, David W., McIntyre, Quinn, Chen, Beidi, Rudra, Atri, Zhang, Ce, Re, Christopher, Ermon, Stefano, Bengio, Yoshua
Attention-free approaches such as long convolution sequence models (LCSMs), e.g., H3 [1] and Hyena [2], have shown promise in matching Transformer [3, 4] performance across a wide range of tasks, with sub-quadratic complexity with respect to sequence length. Despite the improved efficiency during training on long sequences, unless the convolution filters are either short or admit a low-dimensional state-space realization, LCSMs still need to process the entire growing sequence at every step of auto-regressive generation, similarly to Transformers. In this work, we seek to refine LCSMs in both efficiency and quality. First, we study the inference stage and propose methods to enable a recurrent mode for auto-regressive generation. Recurrent modes prescribe the existence of a state encoding the past information of the process in a fixed-dimension memory, enabling constant per-step time and constant memory during generation. Then, we draw upon an analysis of pre-trained models to develop architectural enhancements for the Hyena block, simultaneously improving model quality and the efficiency of the distillation procedure. Distilling fast recurrences: we introduce LaughingHyena, the first distillation approach for LCSMs that enables recurrent inference without impacting downstream quality. LaughingHyena seeks compact recurrences in the form of state-space models (SSMs) [5, 6] as the solution of a nonlinear interpolation problem involving the convolution filters of a pre-trained model. Since the total memory cost of SSMs grows only linearly in the state dimension d, our distillation procedure enables high throughput by allowing large batches to be processed during generation.
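Below is a minimal sketch of the distillation idea, assuming a simplified setting rather than the paper's nonlinear interpolation procedure: a long convolution filter is approximated by a diagonal (modal) SSM with fixed, illustrative complex poles, the residues are fit by linear least squares, and generation then proceeds recurrently with O(d) time and memory per token. The filter, pole choice, and function names are assumptions for illustration.

```python
import numpy as np

def fit_modal_ssm(h, poles):
    # Hypothetical sketch: approximate h[t] ~= Re(sum_i residues[i] * poles[i]**t)
    # by least squares over the residues, with the poles held fixed.
    T = len(h)
    V = np.stack([poles**t for t in range(T)])            # T x d modal basis
    residues, *_ = np.linalg.lstsq(V, h.astype(complex), rcond=None)
    return residues

def generate_step(x, u_t, poles, residues):
    # One recurrent generation step: constant time and memory in the sequence length.
    x = poles * x + u_t            # diagonal state update (input matrix absorbed)
    y_t = np.real(residues @ x)    # readout
    return x, y_t

# Toy usage: distill a decaying filter into d = 8 modes, then generate recurrently.
rng = np.random.default_rng(0)
T, d = 512, 8
h = np.exp(-0.01 * np.arange(T)) * np.cos(0.3 * np.arange(T))  # stand-in for a trained filter
poles = 0.99 * np.exp(2j * np.pi * rng.uniform(size=d))        # fixed, stable complex poles
residues = fit_modal_ssm(h, poles)

x = np.zeros(d, dtype=complex)
outputs = []
for u_t in rng.normal(size=T):
    x, y_t = generate_step(x, u_t, poles, residues)
    outputs.append(y_t)
```

Because each generated token touches only the d-dimensional state, memory no longer grows with the prefix length, which is what allows large generation batches and high throughput.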