Multi-level projection with exponential parallel speedup; Application to sparse auto-encoders neural networks
Perez, Guillaume, Barlaud, Michel
–arXiv.org Artificial Intelligence
The $\ell_{1,\infty}$ norm is an efficient structured projection, but the complexity of the best known algorithm is unfortunately $\mathcal{O}\big(n m \log(n m)\big)$ for a matrix in $\mathbb{R}^{n\times m}$. In this paper, we propose a new bi-level projection method whose time complexity for the $\ell_{1,\infty}$ norm is only $\mathcal{O}\big(n m\big)$ for a matrix in $\mathbb{R}^{n\times m}$, and $\mathcal{O}\big(n + m\big)$ with full parallel power. We generalize our method to tensors and propose a new multi-level projection whose induced decomposition yields a linear parallel speedup up to an exponential speedup factor, resulting in a time complexity lower-bounded by the sum of the dimensions instead of their product. We provide a large base of implementations of our framework for bi-level and tri-level projections (matrices and tensors) and various norms, together with a parallel implementation. Experiments show that our projection is $2$ times faster than the current fastest Euclidean algorithms while providing the same accuracy and better sparsity in neural network applications.
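The abstract does not spell out the bi-level algorithm itself, but one common way to organize an $\ell_{1,\infty}$-ball projection in two levels is to project the vector of row-wise $\ell_\infty$ norms onto an $\ell_1$ ball and then clip each row to its new per-row radius. The sketch below illustrates that reading only; the function names, the sorting-based $\ell_1$ step, and the exact composition of the two levels are assumptions for illustration, not the authors' implementation (a pivot-based $\ell_1$ projection would be needed to reach the stated linear-time bound).

```python
import numpy as np

def project_l1_ball(v, radius):
    """Projection of a non-negative vector v onto the l1 ball of the given
    radius. Sorting-based variant, O(n log n); a pivot-based variant runs
    in expected linear time, which the paper's complexity bounds assume."""
    if v.sum() <= radius:
        return v.copy()
    u = np.sort(v)[::-1]                     # sort in decreasing order
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def bilevel_l1inf_projection(X, radius):
    """Hypothetical bi-level sketch (not the paper's exact algorithm):
    1) take the per-row l_inf norms,
    2) project that length-n vector onto the l1 ball of the target radius,
    3) clip each row to its shrunken per-row radius.
    Step 3 treats rows independently, which is where row-level
    parallelism (and the O(n + m) parallel bound) would come from."""
    row_max = np.abs(X).max(axis=1)             # per-row l_inf norms
    new_max = project_l1_ball(row_max, radius)  # shrink them onto the l1 ball
    return np.clip(X, -new_max[:, None], new_max[:, None])

# Example: project a random matrix so that its l_{1,inf} norm is at most 1.
X = np.random.randn(5, 8)
Y = bilevel_l1inf_projection(X, radius=1.0)
print(np.abs(Y).max(axis=1).sum())  # <= 1.0 up to numerical tolerance
```

Under this reading, sparsity arises because rows whose $\ell_\infty$ norm is shrunk to zero are zeroed out entirely, which is the structured (row-wise) sparsity pattern useful for pruning neurons in sparse auto-encoders.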
Jul-4-2024
- Country:
- Europe > France (0.14)
- North America > Canada (0.14)
- Genre:
- Research Report (0.64)
- Industry:
- Health & Medicine > Therapeutic Area > Oncology (0.93)
- Technology: