One time is not enough: iterative tensor decomposition for neural network compression

Julia Gusak, Maksym Kholyavchenko, Evgeny Ponomarev, Larisa Markeeva, Ivan Oseledets, Andrzej Cichocki — Machine Learning

Low-rank tensor approximation is a promising technique for compressing deep neural networks. We propose a new, simple, and efficient iterative approach that alternates low-rank factorization with smart rank selection and fine-tuning. We demonstrate the efficiency of our method compared to non-iterative ones. Our approach improves the compression rate while maintaining accuracy across a variety of tasks.
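To illustrate the iterative idea in the abstract, here is a minimal sketch in NumPy. It is not the authors' implementation: the gradual rank schedule is a hypothetical stand-in for their rank selection, and real fine-tuning on the task loss (which the comments mark) is omitted. It only shows the structure of alternating factorization with a shrinking rank instead of truncating once.

```python
import numpy as np

def truncated_svd(W, rank):
    # Low-rank factorization step: W is approximated by U @ V of the given rank.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank]

def compress_iteratively(W, target_rank, steps=3):
    # Hypothetical schedule: shrink the rank gradually toward target_rank
    # rather than truncating in one shot (the "iterative" idea).
    ranks = np.linspace(min(W.shape), target_rank, steps + 1).astype(int)[1:]
    approx = W
    for r in ranks:
        U, V = truncated_svd(approx, r)
        approx = U @ V
        # In the paper, a fine-tuning pass on the task loss would follow
        # each factorization step; here we only re-approximate the weights.
    return approx
```

With fine-tuning inserted after each step, the network can recover accuracy before the next, more aggressive rank reduction, which is what distinguishes the iterative scheme from a single factorize-then-fine-tune pass.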